| title (string, 4 to 168 chars) | content (string, 7 to 1.74M chars) | commands (sequence, 1 to 5.62k items, nullable) | url (string, 79 to 342 chars) |
---|---|---|---|
Chapter 5. Adding TLS Certificates to the Red Hat Quay Container | Chapter 5. Adding TLS Certificates to the Red Hat Quay Container To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/ beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory. 5.1. Add TLS certificates to Red Hat Quay View certificate to be added to the container Create certs directory and copy certificate there Obtain the Quay container's CONTAINER ID with podman ps : Restart the container with that ID: Examine the certificate copied into the container namespace: 5.2. Adding custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes When deployed on Kubernetes, Red Hat Quay mounts in a secret as a volume to store config assets. Currently, this breaks the upload certificate function of the superuser panel. As a temporary workaround, base64-encoded certificates can be added to the secret after Red Hat Quay has been deployed. Use the following procedure to add custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes. Prerequisites Red Hat Quay has been deployed. You have a custom ca.crt file. Procedure Base64 encode the contents of an SSL/TLS certificate by entering the following command: USD cat ca.crt | base64 -w 0 Example output ...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Enter the following kubectl command to edit the quay-enterprise-config-secret file: USD kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret Add an entry for the certificate and paste the full base64-encoded string under the entry. For example: custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Use the kubectl delete command to remove all Red Hat Quay pods. For example: USD kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms Afterwards, the Red Hat Quay deployment automatically schedules replacement pods with the new certificate data. | [
"cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] -----END CERTIFICATE-----",
"mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt",
"sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.10.9 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller",
"sudo podman restart 5a3e82c4a75f",
"sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV",
"cat ca.crt | base64 -w 0",
"...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret",
"custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/config-custom-ssl-certs-manual |
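The edit-and-paste step in section 5.2 can also be scripted. The sketch below is a non-authoritative example that assumes the same quay-enterprise namespace and quay-enterprise-config-secret shown above; the custom-cert.crt key name and the label selector are assumptions, not values taken from the original procedure.

```
# Encode the certificate and merge it into the config secret in one step.
CERT_B64=$(base64 -w 0 ca.crt)
kubectl --namespace quay-enterprise patch secret quay-enterprise-config-secret \
  --type merge -p "{\"data\":{\"custom-cert.crt\":\"${CERT_B64}\"}}"

# Delete the Quay pods so the deployment reschedules them with the new data.
# The label selector is an assumption; verify it with `kubectl get pods --show-labels`
# or delete the pods by name as in the example above.
kubectl --namespace quay-enterprise delete pod -l quay-component=quay-app
```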
Deploying into JBoss EAP | Deploying into JBoss EAP Red Hat Fuse 7.13 Deploy application packages into the JBoss Enterprise Application Platform (EAP) container Red Hat Fuse Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_jboss_eap/index |
Chapter 4. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules | Chapter 4. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules As a storage administrator, you can use cephadm-ansible modules in Ansible playbooks to administer your Red Hat Ceph Storage cluster. The cephadm-ansible package provides several modules that wrap cephadm calls to let you write your own unique Ansible playbooks to administer your cluster. Note At this time, cephadm-ansible modules only support the most important tasks. Any operation not covered by cephadm-ansible modules must be completed using either the command or shell Ansible modules in your playbooks. 4.1. The cephadm-ansible modules The cephadm-ansible modules are a collection of modules that simplify writing Ansible playbooks by providing a wrapper around cephadm and ceph orch commands. You can use the modules to write your own unique Ansible playbooks to administer your cluster using one or more of the modules. The cephadm-ansible package includes the following modules: cephadm_bootstrap ceph_orch_host ceph_config ceph_orch_apply ceph_orch_daemon cephadm_registry_login 4.2. The cephadm-ansible modules options The following tables list the available options for the cephadm-ansible modules. Options listed as required need to be set when using the modules in your Ansible playbooks. Options listed with a default value of true indicate that the option is automatically set when using the modules and you do not need to specify it in your playbook. For example, for the cephadm_bootstrap module, the Ceph Dashboard is installed unless you set dashboard: false . Table 4.1. Available options for the cephadm_bootstrap module. cephadm_bootstrap Description Required Default mon_ip Ceph Monitor IP address. true image Ceph container image. false docker Use docker instead of podman . false fsid Define the Ceph FSID. false pull Pull the Ceph container image. false true dashboard Deploy the Ceph Dashboard. false true dashboard_user Specify a specific Ceph Dashboard user. false dashboard_password Ceph Dashboard password. false monitoring Deploy the monitoring stack. false true firewalld Manage firewall rules with firewalld. false true allow_overwrite Allow overwrite of existing --output-config, --output-keyring, or --output-pub-ssh-key files. false false registry_url URL for custom registry. false registry_username Username for custom registry. false registry_password Password for custom registry. false registry_json JSON file with custom registry login information. false ssh_user SSH user to use for cephadm ssh to hosts. false ssh_config SSH config file path for cephadm SSH client. false allow_fqdn_hostname Allow hostname that is a fully-qualified domain name (FQDN). false false cluster_network Subnet to use for cluster replication, recovery and heartbeats. false Table 4.2. Available options for the ceph_orch_host module. ceph_orch_host Description Required Default fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false name Name of the host to add, remove, or update. true address IP address of the host. true when state is present . set_admin_label Set the _admin label on the specified host. false false labels The list of labels to apply to the host. false [] state If set to present , it ensures the name specified in name is present. If set to absent , it removes the host specified in name . If set to drain , it schedules to remove all daemons from the host specified in name . 
false present Table 4.3. Available options for the ceph_config module ceph_config Description Required Default fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false action Whether to set or get the parameter specified in option . false set who Which daemon to set the configuration to. true option Name of the parameter to set or get . true value Value of the parameter to set. true if action is set Table 4.4. Available options for the ceph_orch_apply module. ceph_orch_apply Description Required fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false spec The service specification to apply. true Table 4.5. Available options for the ceph_orch_daemon module. ceph_orch_daemon Description Required fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false state The desired state of the service specified in name . true If started , it ensures the service is started. If stopped , it ensures the service is stopped. If restarted , it will restart the service. daemon_id The ID of the service. true daemon_type The type of service. true Table 4.6. Available options for the cephadm_registry_login module cephadm_registry_login Description Required Default state Login or logout of a registry. false login docker Use docker instead of podman . false registry_url The URL for custom registry. false registry_username Username for custom registry. true when state is login . registry_password Password for custom registry. true when state is login . registry_json The path to a JSON file. This file must be present on remote hosts prior to running this task. This option is currently not supported. 4.3. Bootstrapping a storage cluster using the cephadm_bootstrap and cephadm_registry_login modules As a storage administrator, you can bootstrap a storage cluster using Ansible by using the cephadm_bootstrap and cephadm_registry_login modules in your Ansible playbook. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream. Installation of the cephadm-ansible package on the Ansible administration node. Passwordless SSH is set up on all hosts in the storage cluster. Hosts are registered with CDN. Note For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create the hosts file and add hosts, labels, and monitor IP address of the first host in the storage cluster: Syntax Example Run the preflight playbook: Syntax Example Create a playbook to bootstrap your cluster: Syntax Example Run the playbook: Syntax Example Verification Review the Ansible output after running the playbook. 4.4. Adding or removing hosts using the ceph_orch_host module As a storage administrator, you can add and remove hosts in your storage cluster by using the ceph_orch_host module in your Ansible playbook. Prerequisites A running Red Hat Ceph Storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. 
Installation of the cephadm-ansible package on the Ansible administration node. New hosts have the storage cluster's public SSH key. For more information about copying the storage cluster's public SSH keys to new hosts, see Adding hosts . Procedure Use the following procedure to add new hosts to the cluster: Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Add the new hosts and labels to the Ansible inventory file. Syntax Example Note If you have previously added the new hosts to the Ansible inventory file and ran the preflight playbook on the hosts, skip to step 3. Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chrony , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. Create a playbook to add the new hosts to the cluster: Syntax Note By default, Ansible executes all tasks on the host that matches the hosts line of your playbook. The ceph orch commands must run on the host that contains the admin keyring and the Ceph configuration file. Use the delegate_to keyword to specify the admin host in your cluster. Example In this example, the playbook adds the new hosts to the cluster and displays a current list of hosts. Run the playbook to add additional hosts to the cluster: Syntax Example Use the following procedure to remove hosts from the cluster: Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook to remove a host or hosts from the cluster: Syntax Example In this example, the playbook tasks drain all daemons on host07 , removes the host from the cluster, and displays a current list of hosts. Run the playbook to remove host from the cluster: Syntax Example Verification Review the Ansible task output displaying the current list of hosts in the cluster: Example 4.5. Setting configuration options using the ceph_config module As a storage administrator, you can set or get Red Hat Ceph Storage configuration options using the ceph_config module. Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. For more information about adding hosts to your storage cluster, see Adding or removing hosts using the ceph_orch_host module . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with configuration changes: Syntax Example In this example, the playbook first sets the mon_allow_pool_delete option to false . The playbook then gets the current mon_allow_pool_delete setting and displays the value in the Ansible output. Run the playbook: Syntax Example Verification Review the output from the playbook tasks. Example Additional Resources See the Red Hat Ceph Storage Configuration Guide for more details on configuration options. 4.6. Applying a service specification using the ceph_orch_apply module As a storage administrator, you can apply service specifications to your storage cluster using the ceph_orch_apply module in your Ansible playbooks. 
A service specification is a data structure to specify the service attributes and configuration settings that is used to deploy the Ceph service. You can use a service specification to deploy Ceph service types like mon , crash , mds , mgr , osd , rdb , or rbd-mirror . Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. For more information about adding hosts to your storage cluster, see Adding or removing hosts using the ceph_orch_host module . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with the service specifications: Syntax Example In this example, the playbook deploys the Ceph OSD service on all hosts with the label osd . Run the playbook: Syntax Example Verification Review the output from the playbook tasks. Additional Resources See the Red Hat Ceph Storage Operations Guide for more details on service specification options. 4.7. Managing Ceph daemon states using the ceph_orch_daemon module As a storage administrator, you can start, stop, and restart Ceph daemons on hosts using the ceph_orch_daemon module in your Ansible playbooks. Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. For more information about adding hosts to your storage cluster, see Adding or removing hosts using the ceph_orch_host module . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with daemon state changes: Syntax Example In this example, the playbook starts the OSD with an ID of 0 and stops a Ceph Monitor with an id of host02 . Run the playbook: Syntax Example Verification Review the output from the playbook tasks. | [
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR",
"[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml",
"TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml",
"TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :",
"[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE",
"[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/installation_guide/managing-a-red-hat-ceph-storage-cluster-using-cephadm-ansible-modules |
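To complement the start and stop examples above, the following is a minimal, hedged sketch of a playbook that restarts a single daemon with the ceph_orch_daemon module, using only the options documented in Table 4.5; the admin host host01, the inventory file name hosts, and the OSD ID 1 are placeholders for your own values.

```
# Run from /usr/share/cephadm-ansible on the Ansible administration node.
# Write a small playbook that restarts one OSD daemon, then run it.
cat > restart_osd.yml <<'EOF'
---
- name: restart a single osd daemon
  hosts: host01            # admin host that holds the admin keyring and ceph.conf
  become: true
  gather_facts: false
  tasks:
    - name: restart osd.1
      ceph_orch_daemon:
        state: restarted   # started | stopped | restarted
        daemon_id: 1
        daemon_type: osd
EOF
ansible-playbook -i hosts restart_osd.yml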
Chapter 18. Red Hat Software Collections | Chapter 18. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Since Red Hat Software Collections 2.3, the Eclipse development platform is provided as a separate Software Collection. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/chap-red_hat_enterprise_linux-6.8_release_notes-red_hat_software_collections |
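As a brief, hedged illustration of the scl mechanism described above, the commands below list installed Software Collections and open a shell with one of them enabled; the collection name is only an example and depends on what is installed on your system.

```
# List the Software Collections installed on this system.
scl --list

# Start a subshell with the chosen collection's packages on the PATH.
# rh-python36 is an illustrative collection name, not a recommendation.
scl enable rh-python36 bash
```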
Chapter 2. Searching in the console | Chapter 2. Searching in the console For Red Hat Advanced Cluster Management for Kubernetes, search provides visibility into your Kubernetes resources across all of your clusters. Search also indexes the Kubernetes resources and the relationships to other resources. Search components Search customization and configurations Search operations and data types 2.1. Search components The search architecture is composed of the following components: Table 2.1. Search component table Component name Metrics Metric type Description search-collector Watches the Kubernetes resources, collects the resource metadata, computes relationships for resources across all of your managed clusters, and sends the collected data to the search-indexer . The search-collector on your managed cluster runs as a pod named, klusterlet-addon-search . search-indexer Receives resource metadata from the collectors and writes to PostgreSQL database. The search-indexer also watches resources in the hub cluster to keep track of active managed clusters. search_indexer_request_duration Histogram Time (seconds) the search indexer takes to process a request (from managed cluster). search_indexer_request_size Histogram Total changes (add, update, delete) in the search indexer request (from managed cluster). search_indexer_request_count Counter Total requests received by the search indexer (from managed clusters). search_indexer_requests_in_flight Gauge Total requests the search indexer is processing at a given time. search-api Provides access to all cluster data in the search-indexer through GraphQL and enforces role-based access control (RBAC). search_api_requests Histogram Histogram of HTTP requests duration in seconds. search_dbquery_duration_seconds Histogram Latency of database requests in seconds. search_api_db_connection_failed_total Counter The total number of database connection attempts that failed. search-postgres Stores collected data from all managed clusters in an instance of the PostgreSQL database. Search is configured by default on the hub cluster. When you provision or manually import a managed cluster, the klusterlet-addon-search is enabled. If you want to disable search on your managed cluster, see Modifying the klusterlet add-ons settings of your cluster for more information. 2.2. Search customization and configurations You can modify the default values in the search-v2-operator custom resource. To view details of the custom resource, run the following command: oc get search search-v2-operator -o yaml The search operator watches the search-v2-operator custom resource, reconciles the changes and updates active pods. View the following descriptions of the configurations: PostgreSQL database storage: When you install Red Hat Advanced Cluster Management, the PostgreSQL database is configured to save the PostgreSQL data in an empty directory ( emptyDir ) volume. If the empty directory size is limited, you can save the PostgreSQL data on a Persistent Volume Claim (PVC) to improve search performance. You can select a storageclass from your Red Hat Advanced Cluster Management hub cluster to back up your search data. 
For example, if you select the gp2 storageclass your configuration might resemble the following example: apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management labels: cluster.open-cluster-management.io/backup: "" spec: dbStorage: size: 10Gi storageClassName: gp2 This configuration creates a PVC named gp2-search and is mounted to the search-postgres pod. By default, the storage size is 10Gi . You can modify the storage size. For example, 20Gi might be sufficient for about 200 managed clusters. Optimize cost by tuning the pod memory or CPU requirements, replica count, and update log levels for any of the four search pods ( indexer , database , queryapi , or collector pod). Update the deployment section of the search-v2-operator custom resource. There are four deployments managed by the search-v2-operator , which can be updated individually. Your search-v2-operator custom resource might resemble the following file: apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management spec: deployments: collector: resources: 1 limits: cpu: 500m memory: 128Mi requests: cpu: 250m memory: 64Mi indexer: replicaCount: 3 database: 2 envVar: - name: POSTGRESQL_EFFECTIVE_CACHE_SIZE value: 1024MB - name: POSTGRESQL_SHARED_BUFFERS value: 512MB - name: WORK_MEM value: 128MB queryapi: arguments: 3 - -v=3 1 You can apply resources to an indexer , database , queryapi , or collector pod. 2 You can add multiple environment variables in the envVar section to specify a value for each variable that you name. 3 You can control the log level verbosity for any of the four pods by adding the - -v=3 argument. See the following example where memory resources are applied to the indexer pod: indexer: resources: limits: memory: 5Gi requests: memory: 1Gi Node placement for search pods: You can update the Placement of search pods by using the nodeSelector parameter, or the tolerations parameter. View the following example configuration: spec: dbStorage: size: 10Gi deployments: collector: {} database: {} indexer: {} queryapi: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists 2.3. Search operations and data types Specify your search query by using search operations as conditions. Characters such as >, >=, <, <=, != are supported. See the following search operation table: Table 2.2. Search operation table Default operation Data type Description = string, number This is the default operation. ! or != string, number This represents the NOT operation, which means to exclude from the search results. <, ⇐, >, >= number > date Dates matching the last hour, day, week, month, and year. * string Partial string match. 2.4. Additional resources For instruction about how to manage search, see Managing search . For more topics about the Red Hat Advanced Cluster Management for Kubernetes console, see Web console . 2.5. Managing search Use search to query resource data from your clusters. Required access: Cluster administrator Continue reading the following topics: Creating search configurable collection Customizing the search console Querying in the console Updating klusterlet-addon-search deployments on managed clusters 2.5.1. Creating search configurable collection To define which Kubernetes resources get collected from the cluster, create the search-collector-config config map. 
Complete the following steps: Run the following command to create the search-collector-config config map: oc apply -f <your-search-collector-config>.yaml List the resources in the allow ( data.AllowedResources ) and deny list ( data.DeniedResources ) sections within the config map. Your config map might resemble the following YAML file: apiVersion: v1 kind: ConfigMap metadata: name: search-collector-config namespace: <namespace where search-collector add-on is deployed> data: AllowedResources: |- 1 - apiGroups: - "*" resources: - services - pods - apiGroups: - admission.k8s.io - authentication.k8s.io resources: - "*" DeniedResources: |- 2 - apiGroups: - "*" resources: - secrets - apiGroups: - admission.k8s.io resources: - policies - iampolicies - certificatepolicies 1 The config map example displays services and pods to be collected from all apiGroups , while allowing all resources to be collected from the admission.k8s.io and authentication.k8s.io apiGroups . 2 The config map example also prevents the central collection of secrets from all apiGroups while preventing the collection of policies , iampolicies , and certificatepolicies from the apiGroup admission.k8s.io . Note: If you do not provide a config map, all resources are collected by default. If you only provide AllowedResources , all resources not listed in AllowedResources are automatically excluded. Resources listed in AllowedResources and DeniedResources at the same time are also excluded. 2.5.2. Customizing the search console Customize your search results and limits. Complete the following tasks to perform the customization: Customize the search result limit from the OpenShift Container Platform console. Update the console-mce-config in the multicluster-engine namespace. These settings apply to all users and might affect performance. View the following performance parameter descriptions: SAVED_SEARCH_LIMIT - The maximum number of saved searches for each user. By default, there is a limit of ten saved searches for each user. The default value is 10 . To update the limit, add the following key-value pair to the console-config config map: SAVED_SEARCH_LIMIT: x . SEARCH_RESULT_LIMIT - The maximum number of search results displayed in the console. Default value is 1000 . To remove this limit, set it to -1 . SEARCH_AUTOCOMPLETE_LIMIT - The maximum number of suggestions retrieved for the search bar typeahead. Default value is 10,000 . To remove this limit, set it to -1 . Run the following patch command from the OpenShift Container Platform console to change the search result limit to 100 items: oc patch configmap console-mce-config -n multicluster-engine --type merge -p '{"data":{"SEARCH_RESULT_LIMIT":"100"}}' To add, edit, or remove suggested searches, create a config map named console-search-config and configure the suggestedSearches section. Suggested searches that are listed are also displayed in the console. Each search object requires an id, name, and searchText. 
View the following config map example: kind: ConfigMap apiVersion: v1 metadata: name: console-search-config namespace: <acm-namespace> 1 data: suggestedSearches: |- [ { "id": "search.suggested.workloads.name", "name": "Workloads", "description": "Show workloads running on your fleet", "searchText": "kind:DaemonSet,Deployment,Job,StatefulSet,ReplicaSet" }, { "id": "search.suggested.unhealthy.name", "name": "Unhealthy pods", "description": "Show pods with unhealthy status", "searchText": "kind:Pod status:Pending,Error,Failed,Terminating,ImagePullBackOff,CrashLoopBackOff,RunContainerError,ContainerCreating" }, { "id": "search.suggested.createdLastHour.name", "name": "Created last hour", "description": "Show resources created within the last hour", "searchText": "created:hour" }, { "id": "search.suggested.virtualmachines.name", "name": "Virtual Machines", "description": "Show virtual machine resources", "searchText": "kind:VirtualMachine" } ] 1 Add the namespace where search is enabled. 2.5.3. Querying in the console You can type any text value in the Search box and results include anything with that value from any property, such as a name or namespace. Queries that contain an empty space are not supported. For more specific search results, include the property selector in your search. You can combine related values for the property for a more precise scope of your search. For example, search for cluster:dev red to receive results that match the string "red" in the dev cluster. Complete the following steps to make queries with search: Click Search in the navigation menu. Type a word in the Search box , then Search finds your resources that contain that value. As you search for resources, you receive other resources that are related to your original search result, which help you visualize how the resources interact with other resources in the system. Search returns and lists each cluster with the resource that you search. For resources in the hub cluster, the cluster name is displayed as local-cluster . Your search results are grouped by kind , and each resource kind is grouped in a table. Your search options depend on your cluster objects. You can refine your results with specific labels. Search is case-sensitive when you query labels. See the following examples that you can select for filtering: name , namespace , status , and other resource fields. Auto-complete provides suggestions to refine your search. See the following example: Search for a single field, such as kind:pod to find all pod resources. Search for multiple fields, such as kind:pod namespace:default to find the pods in the default namespace. Notes: When you search for more than one property selector with multiple values, the search returns either of the values that were queried. View the following examples: When you search for kind:Pod name:a , any pod named a is returned. When you search for kind:Pod name:a,b , any pod named a or b are returned. Search for kind:pod status:!Running to find all pod resources where the status is not Running . Search for kind:pod restarts:>1 to find all pods that restarted at least twice. If you want to save your search, click the Save search icon. To download your search results, select the Export as CSV button. 2.5.4. Updating klusterlet-addon-search deployments on managed clusters To collect the Kubernetes objects from the managed clusters, the klusterlet-addon-search pod is run on all the managed clusters where search is enabled. 
This deployment is run in the open-cluster-management-agent-addon namespace. A managed cluster with a high number of resources might require more memory for the klusterlet-addon-search deployment to function. Resource requirements for the klusterlet-addon-search pod in a managed cluster can be specified in the ManagedClusterAddon custom resource in your Red Hat Advanced Cluster Management hub cluster. There is a namespace for each managed cluster with the managed cluster name. Complete the following steps: Edit the ManagedClusterAddon custom resource from the namespace matching the managed cluster name. Run the following command to update the resource requirement in xyz managed cluster: oc edit managedclusteraddon search-collector -n xyz Append the resource requirements as annotations. View the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: annotations: addon.open-cluster-management.io/search_memory_limit: 2048Mi addon.open-cluster-management.io/search_memory_request: 512Mi The annotation overrides the resource requirements on the managed clusters and automatically restarts the pod with new resource requirements. Note: You can discover all resources defined in your managed cluster by using the API Explorer in the console. Alternatively, you can discover all resources by running the following command: oc api-resources 2.5.5. Additional resources See multicluster global hub for more details. See Observing environments introduction . | [
"get search search-v2-operator -o yaml",
"apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management labels: cluster.open-cluster-management.io/backup: \"\" spec: dbStorage: size: 10Gi storageClassName: gp2",
"apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management spec: deployments: collector: resources: 1 limits: cpu: 500m memory: 128Mi requests: cpu: 250m memory: 64Mi indexer: replicaCount: 3 database: 2 envVar: - name: POSTGRESQL_EFFECTIVE_CACHE_SIZE value: 1024MB - name: POSTGRESQL_SHARED_BUFFERS value: 512MB - name: WORK_MEM value: 128MB queryapi: arguments: 3 - -v=3",
"indexer: resources: limits: memory: 5Gi requests: memory: 1Gi",
"spec: dbStorage: size: 10Gi deployments: collector: {} database: {} indexer: {} queryapi: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists",
"apply -f <your-search-collector-config>.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: search-collector-config namespace: <namespace where search-collector add-on is deployed> data: AllowedResources: |- 1 - apiGroups: - \"*\" resources: - services - pods - apiGroups: - admission.k8s.io - authentication.k8s.io resources: - \"*\" DeniedResources: |- 2 - apiGroups: - \"*\" resources: - secrets - apiGroups: - admission.k8s.io resources: - policies - iampolicies - certificatepolicies",
"patch configmap console-mce-config -n multicluster-engine --type merge -p '{\"data\":{\"SEARCH_RESULT_LIMIT\":\"100\"}}'",
"kind: ConfigMap apiVersion: v1 metadata: name: console-search-config namespace: <acm-namespace> 1 data: suggestedSearches: |- [ { \"id\": \"search.suggested.workloads.name\", \"name\": \"Workloads\", \"description\": \"Show workloads running on your fleet\", \"searchText\": \"kind:DaemonSet,Deployment,Job,StatefulSet,ReplicaSet\" }, { \"id\": \"search.suggested.unhealthy.name\", \"name\": \"Unhealthy pods\", \"description\": \"Show pods with unhealthy status\", \"searchText\": \"kind:Pod status:Pending,Error,Failed,Terminating,ImagePullBackOff,CrashLoopBackOff,RunContainerError,ContainerCreating\" }, { \"id\": \"search.suggested.createdLastHour.name\", \"name\": \"Created last hour\", \"description\": \"Show resources created within the last hour\", \"searchText\": \"created:hour\" }, { \"id\": \"search.suggested.virtualmachines.name\", \"name\": \"Virtual Machines\", \"description\": \"Show virtual machine resources\", \"searchText\": \"kind:VirtualMachine\" } ]",
"edit managedclusteraddon search-collector -n xyz",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: annotations: addon.open-cluster-management.io/search_memory_limit: 2048Mi addon.open-cluster-management.io/search_memory_request: 512Mi"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/observability/searching-in-the-console-intro |
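The PostgreSQL storage change described in the customization section can also be applied without opening an editor. This is a minimal sketch, assuming the default open-cluster-management namespace shown above; the 20Gi size comes from the sizing note in the text and the gp2 storage class is the illustrative value used there, so substitute your own storage class.

```
# Merge a PVC-backed storage request into the search-v2-operator custom resource.
oc patch search search-v2-operator -n open-cluster-management --type merge \
  -p '{"spec":{"dbStorage":{"size":"20Gi","storageClassName":"gp2"}}}'

# Confirm that the operator picked up the change.
oc get search search-v2-operator -n open-cluster-management -o yaml
```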
16.10. Deleting and Resurrecting Entries | 16.10. Deleting and Resurrecting Entries This section describes how enabling synchronization affects deleted entries on the sync peers and how resurrected entries are handled. 16.10.1. Deleting Entries All changes on Active Directory peers are always synchronized back to the Directory Server. This means that when an Active Directory group or user account is deleted on the Active Directory domain, the deletion is automatically synchronized back to the Directory Server sync peer server. On Directory Server, on the other hand, when a Directory Server account is deleted, the corresponding entry on Active Directory is only deleted if the Directory Server entry has the ntUserDeleteAccount or ntGroupDeleteGroup attribute set to true . Note When a Directory Server entry is synchronized over to Active Directory for the first time, Active Directory automatically assigns it a unique ID. At the synchronization interval, the unique ID is synchronized back to the Directory Server entry and stored as the ntUniqueId attribute. If the Directory Server entry is deleted on Active Directory before the unique ID is synchronized back to Directory Server, the entry will not be deleted on Directory Server. Directory Server uses the ntUniqueId attribute to identify and synchronize changes made on Active Directory to the corresponding Directory Server entry; without that attribute, Directory Server will not recognize the deletion. To delete the entry on Active Directory and then synchronize the deletion over to Directory Server, wait the length of the winSyncInterval (by default, five minutes) after the entry is created before deleting it so that the ntUniqueId attribute is synchronized. 16.10.2. Resurrecting Entries It is possible to add deleted entries back into Directory Server; the deleted entries are called tombstone entries. When a deleted entry which was synchronized between Directory Server and Active Directory is re-added to Directory Server, the resurrected Directory Server entry has all of its original attributes and values. This is called tombstone reanimation . The resurrected entry includes the original ntUniqueId attribute which was used to synchronize the entries, which signals to the Active Directory server that this new entry is a tombstone entry. Active Directory resurrects the old entry and preserves the original unique ID for the entry. For Active Directory entries, when the tombstone entry is resurrected on Directory Server, all of the attributes of the original Directory Server entry are retained and are still included in the resurrected Active Directory entry. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using_windows_sync-deleting_entries |
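Before deleting a recently synchronized entry, it can help to confirm that the entry has already received its ntUniqueId attribute, since without it the deletion is not synchronized. The ldapsearch sketch below is an assumption-laden example: the server URL, bind DN, and entry DN are placeholders, not values from this chapter.

```
# Check whether the synchronized entry already carries ntUniqueId.
# If the attribute is missing, wait at least one winSyncInterval before deleting.
ldapsearch -x -H ldap://ds.example.com -D "cn=Directory Manager" -W \
  -b "uid=jsmith,ou=People,dc=example,dc=com" -s base "(objectClass=*)" ntUniqueId
```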
Chapter 3. Adding subscriptions to a subscription allocation for a disconnected Satellite Server | Chapter 3. Adding subscriptions to a subscription allocation for a disconnected Satellite Server Only users on a disconnected Satellite Server need to add subscriptions to a subscription allocation. If you are a disconnected user, you must complete this step before you can download the manifest and add it to the host system. Users on a connected Satellite Server skip this step. For information about managing a subscription manifest for a connected Satellite Server, see Creating and managing a manifest for a connected Satellite Server . Procedure To add subscriptions to a subscription allocation for a disconnected Satellite Server, complete the following steps: From the Subscription Allocations page, click the allocation to which you are adding subscriptions. Click the Subscriptions tab. Click Add Subscriptions . Enter the number of entitlements for each subscription you plan to add. Ensure that you are adding the correct number of entitlements for the system you are using. Click Submit . Note You can add future-dated subscriptions, that is, subscriptions that have a start date in the future, to an allocation. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_disconnected_satellite_server/subs_allocation_proc |
Chapter 2. cinder | Chapter 2. cinder The following chapter contains information about the configuration options in the cinder service. 2.1. cinder.conf This section contains options for the /etc/cinder/cinder.conf file. 2.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/cinder/cinder.conf file. . Configuration option = Default value Type Description acs5000_copy_interval = 5 integer value When volume copy task is going on,refresh volume status interval acs5000_volpool_name = ['pool01'] list value Comma separated list of storage system storage pools for volumes. allocated_capacity_weight_multiplier = -1.0 floating point value Multiplier used for weighing allocated capacity. Positive numbers mean to stack vs spread. allow_availability_zone_fallback = False boolean value If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing. allow_compression_on_image_upload = False boolean value The strategy to use for image compression on upload. Default is disallow compression. allowed_direct_url_schemes = [] list value A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file, cinder]. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service api_rate_limit = True boolean value Enables or disables rate limit of the API. as13000_ipsan_pools = ['Pool0'] list value The Storage Pools Cinder should use, a comma separated list. as13000_meta_pool = None string value The pool which is used as a meta pool when creating a volume, and it should be a replication pool at present. If not set, the driver will choose a replication pool from the value of as13000_ipsan_pools. as13000_token_available_time = 3300 integer value The effective time of token validity in seconds. auth_strategy = keystone string value The strategy to use for auth. Supports noauth or keystone. az_cache_duration = 3600 integer value Cache volume availability zones in memory for the provided duration in seconds backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. backend_availability_zone = None string value Availability zone for this volume backend. If not set, the storage_availability_zone option value is used as the default for all backends. backend_stats_polling_interval = 60 integer value Time in seconds between requests for usage statistics from the backend. Be aware that generating usage statistics is expensive for some backends, so setting this value too low may adversely affect performance. 
backup_api_class = cinder.backup.api.API string value The full class name of the volume backup API class backup_ceph_chunk_size = 134217728 integer value The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. backup_ceph_conf = /etc/ceph/ceph.conf string value Ceph configuration file to use. backup_ceph_image_journals = False boolean value If True, apply JOURNALING and EXCLUSIVE_LOCK feature bits to the backup RBD objects to allow mirroring backup_ceph_pool = backups string value The Ceph pool where volume backups are stored. backup_ceph_stripe_count = 0 integer value RBD stripe count to use when creating a backup image. backup_ceph_stripe_unit = 0 integer value RBD stripe unit to use when creating a backup image. backup_ceph_user = cinder string value The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. backup_compression_algorithm = zlib string value Compression algorithm ("none" to disable) backup_container = None string value Custom directory to use for backups. backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver string value Driver to use for backups. backup_driver_init_check_interval = 60 integer value Time in seconds between checks to see if the backup driver has been successfully initialized, any time the driver is restarted. backup_driver_stats_polling_interval = 60 integer value Time in seconds between checks of the backup driver status. If does not report as working, it is restarted. backup_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. backup_file_size = 1999994880 integer value The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files.backup_file_size must be a multiple of backup_sha_block_size_bytes. backup_manager = cinder.backup.manager.BackupManager string value Full class name for the Manager for volume backup backup_max_operations = 15 integer value Maximum number of concurrent memory heavy operations: backup and restore. Value of 0 means unlimited backup_metadata_version = 2 integer value Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. backup_mount_attempts = 3 integer value The number of attempts to mount NFS shares before raising an error. backup_mount_options = None string value Mount options passed to the NFS client. See NFS man page for details. backup_mount_point_base = USDstate_path/backup_mount string value Base dir containing mount point for NFS share. backup_name_template = backup-%s string value Template string to be used to generate backup names backup_native_threads_pool_size = 60 integer value Size of the native threads pool for the backups. Most backup drivers rely heavily on this, it can be decreased for specific drivers that don't. backup_object_number_per_notification = 10 integer value The number of chunks or objects, for which one Ceilometer notification will be sent backup_posix_path = USDstate_path/backup string value Path specifying where to store backups. backup_s3_block_size = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_s3_object_size has to be multiple of backup_s3_block_size. 
backup_s3_ca_cert_file = None string value path/to/cert/bundle.pem - A filename of the CA cert bundle to use. backup_s3_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the S3 backend storage. The default value is True to enable the timer. backup_s3_endpoint_url = None string value The url where the S3 server is listening. `backup_s3_http_proxy = ` string value Address or host for the http proxy server. `backup_s3_https_proxy = ` string value Address or host for the https proxy server. backup_s3_max_pool_connections = 10 integer value The maximum number of connections to keep in a connection pool. backup_s3_md5_validation = True boolean value Enable or Disable md5 validation in the s3 backend. backup_s3_object_size = 52428800 integer value The size in bytes of S3 backup objects backup_s3_retry_max_attempts = 4 integer value An integer representing the maximum number of retry attempts that will be made on a single request. backup_s3_retry_mode = legacy string value A string representing the type of retry mode. e.g: legacy, standard, adaptive backup_s3_sse_customer_algorithm = None string value The SSECustomerAlgorithm. backup_s3_sse_customer_key must be set at the same time to enable SSE. backup_s3_sse_customer_key = None string value The SSECustomerKey. backup_s3_sse_customer_algorithm must be set at the same time to enable SSE. backup_s3_store_access_key = None string value The S3 query token access key. backup_s3_store_bucket = volumebackups string value The S3 bucket to be used to store the Cinder backup data. backup_s3_store_secret_key = None string value The S3 query token secret key. backup_s3_timeout = 60 floating point value The time in seconds till a timeout exception is thrown. backup_s3_verify_ssl = True boolean value Enable or Disable ssl verify. backup_service_inithost_offload = True boolean value Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted. backup_sha_block_size_bytes = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes. backup_share = None string value NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format. backup_swift_auth = per_user string value Swift authentication mechanism (per_user or single_user). backup_swift_auth_insecure = False boolean value Bypass verification of server certificate when making SSL connection to Swift. backup_swift_auth_url = None uri value The URL of the Keystone endpoint backup_swift_auth_version = 1 string value Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0 or "3" for auth 3.0 backup_swift_block_size = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size. backup_swift_ca_cert_file = None string value Location of the CA certificate file to use for swift client requests. backup_swift_container = volumebackups string value The default Swift container to use backup_swift_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer. 
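As an illustration of the backup_s3_* options, a backup target on an S3-compatible object store might look like the sketch below; the endpoint, bucket, credentials, and the S3 driver class path are assumptions, not values taken from this reference:
[DEFAULT]
# Assumed S3 backup driver class path, following the same naming pattern as the Swift default.
backup_driver = cinder.backup.drivers.s3.S3BackupDriver
backup_s3_endpoint_url = https://s3.example.com
backup_s3_store_bucket = volumebackups
backup_s3_store_access_key = <access-key>
backup_s3_store_secret_key = <secret-key>
backup_s3_object_size = 52428800
backup_s3_block_size = 32768
backup_s3_verify_ssl = True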
backup_swift_key = None string value Swift key for authentication backup_swift_object_size = 52428800 integer value The size in bytes of Swift backup objects backup_swift_project = None string value Swift project/account name. Required when connecting to an auth 3.0 system backup_swift_project_domain = None string value Swift project domain name. Required when connecting to an auth 3.0 system backup_swift_retry_attempts = 3 integer value The number of retries to make for Swift operations backup_swift_retry_backoff = 2 integer value The backoff time in seconds between Swift retries backup_swift_tenant = None string value Swift tenant/account name. Required when connecting to an auth 2.0 system backup_swift_url = None uri value The URL of the Swift endpoint backup_swift_user = None string value Swift user name backup_swift_user_domain = None string value Swift user domain name. Required when connecting to an auth 3.0 system backup_timer_interval = 120 integer value Interval, in seconds, between two progress notifications reporting the backup status backup_use_same_host = False boolean value Backup services use same backend. backup_use_temp_snapshot = False boolean value If this is set to True, a temporary snapshot will be created for performing non-disruptive backups. Otherwise a temporary volume will be cloned in order to perform a backup. backup_workers = 1 integer value Number of backup processes to launch. Improves performance with concurrent backups. capacity_weight_multiplier = 1.0 floating point value Multiplier used for weighing free capacity. Negative numbers mean to stack vs spread. `chap_password = ` string value Password for specified CHAP account name. chap_password_len = 12 integer value Length of the random string for CHAP password. `chap_username = ` string value CHAP user name. chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf string value Chiscsi (CXT) global defaults configuration file cinder_internal_tenant_project_id = None string value ID of the project which will be used as the Cinder internal tenant. cinder_internal_tenant_user_id = None string value ID of the user to be used in volume operations as the Cinder internal tenant. client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. clone_volume_timeout = 680 integer value Create clone volume timeout Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. cloned_volume_same_az = True boolean value Ensure that the new volumes are the same AZ as snapshot or source volume cluster = None string value Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. compression_format = gzip string value Image compression format on image upload compute_api_class = cinder.compute.nova.API string value The full class name of the compute API class to use config-dir = ['~/.project/project.conf.d/', '~/project.conf.d/', '/etc/project/project.conf.d/', '/etc/project.conf.d/'] list value Path to a config directory to pull *.conf files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s) specified via --config-file, arguments hence over-ridden options in the directory take precedence. 
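Taken together, the backup_swift_* options above could describe a Swift backup target that authenticates against a Keystone auth 3.0 endpoint as a single user; the URL, project, and credentials below are placeholders:
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_auth = single_user
backup_swift_auth_version = 3
backup_swift_auth_url = https://keystone.example.com/v3
backup_swift_project = service
backup_swift_project_domain = Default
backup_swift_user = cinder-backup
backup_swift_user_domain = Default
backup_swift_key = <password>
backup_swift_container = volumebackups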
This option must be set from the command-line. config-file = ['~/.project/project.conf', '~/project.conf', '/etc/project/project.conf', '/etc/project.conf'] unknown value Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. Defaults to %(default)s. This option must be set from the command-line. config_source = [] list value Lists configuration groups that provide more details for accessing configuration settings from locations other than local files. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consistencygroup_api_class = cinder.consistencygroup.api.API string value The full class name of the consistencygroup API class control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. datera_503_interval = 5 integer value Interval between 503 retries datera_503_timeout = 120 integer value Timeout for HTTP 503 retry messages datera_api_port = 7717 string value Datera API port. datera_api_version = 2.2 string value Datera API version. datera_debug = False boolean value True to set function arg and return logging datera_debug_replica_count_override = False boolean value ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1 datera_disable_extended_metadata = False boolean value Set to True to disable sending additional metadata to the Datera backend datera_disable_profiler = False boolean value Set to True to disable profiling in the Datera driver datera_disable_template_override = False boolean value Set to True to disable automatic template override of the size attribute when creating from a template datera_enable_image_cache = False boolean value Set to True to enable Datera backend image caching datera_image_cache_volume_type_id = None string value Cinder volume type id to use for cached volumes datera_ldap_server = None string value LDAP authentication server datera_tenant_id = None string value If set to Map --> OpenStack project ID will be mapped implicitly to Datera tenant ID If set to None --> Datera tenant ID will not be used during volume provisioning If set to anything else --> Datera tenant ID will be the provided value datera_volume_type_defaults = {} dict value Settings here will be used as volume-type defaults if the volume-type setting is not provided. This can be used, for example, to set a very low total_iops_max value if none is specified in the volume-type to prevent accidental overusage. Options are specified via the following format, WITHOUT ANY DF: PREFIX: datera_volume_type_defaults=iops_per_gb:100,bandwidth_per_gb:200... etc . db_driver = cinder.db string value Driver to use for database access debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_availability_zone = None string value Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes. 
default_group_type = None string value Default group type to use default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_sandstone_target_ips = [] list value SandStone default target ip. default_volume_type = __DEFAULT__ string value Default volume type to use driver_client_cert = None string value The path to the client certificate for verification, if the driver supports it. driver_client_cert_key = None string value The path to the client certificate key for verification, if the driver supports it. driver_data_namespace = None string value Namespace for driver private data values to be saved in. driver_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend driver_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. driver_use_ssl = False boolean value Tell driver to use SSL for connection to backend storage if the driver supports it. dsware_isthin = False boolean value The flag of thin storage allocation. Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. `dsware_manager = ` string value Fusionstorage manager ip addr for cinder-volume. Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. `dsware_rest_url = ` string value The address of FusionStorage array. For example, "dsware_rest_url=xxx" `dsware_storage_pools = ` string value The list of pools on the FusionStorage array, the semicolon(;) was used to split the storage pools, "dsware_storage_pools = xxx1; xxx2; xxx3" enable_force_upload = False boolean value Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it. enable_new_services = True boolean value Services to be added to the available pool on create enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. enable_v2_api = True boolean value DEPRECATED: Deploy v2 of the Cinder API. enable_v3_api = True boolean value Deploy v3 of the Cinder API. enabled_backends = None list value A list of backend names to use. These backend names should be backed by a unique [CONFIG] group with its options enforce_multipath_for_image_xfer = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. 
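As the enabled_backends description notes, each backend name must be backed by its own configuration group. A minimal multi-backend sketch, in which the group names and backend names are illustrative assumptions:
[DEFAULT]
enabled_backends = lvm-1,rbd-1
default_volume_type = __DEFAULT__

[lvm-1]
volume_backend_name = LVM_iSCSI

[rbd-1]
volume_backend_name = CEPH_RBD
backend_availability_zone = nova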
This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. filter_function = None string value String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. `fusionstorageagent = ` string value Fusionstorage agent ip addr range Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandon. So those configuration items are no longer used. glance_api_insecure = False boolean value Allow to perform insecure SSL (https) requests to glance (https will be used but cert validation will not be performed). glance_api_servers = None list value A list of the URLs of glance API servers available to cinder ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to http. glance_api_ssl_compression = False boolean value Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2. glance_ca_certificates_file = None string value Location of ca certificates file to use for glance client requests. glance_catalog_info = image:glance:publicURL string value Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided. glance_certfile = None string value Location of certificate file to use for glance client requests. glance_core_properties = ['checksum', 'container_format', 'disk_format', 'image_name', 'image_id', 'min_disk', 'min_ram', 'name', 'size'] list value Default core properties of image glance_keyfile = None string value Location of certificate key file to use for glance client requests. glance_num_retries = 3 integer value Number retries when downloading an image from glance glance_request_timeout = None integer value http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used. glusterfs_backup_mount_point = USDstate_path/backup_mount string value Base dir containing mount point for gluster share. glusterfs_backup_share = None string value GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol goodness_function = None string value String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. group_api_class = cinder.group.api.API string value The full class name of the group API class host = <based on operating system> string value Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address. 
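For instance, filter_function and goodness_function accept simple equations that are evaluated per host when the driver filter and goodness weigher are enabled in the scheduler. The expressions below are hypothetical and depend on the statistics a given driver reports:
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,DriverFilter
scheduler_default_weighers = GoodnessWeigher

[lvm-1]
# Reject volumes larger than 500 GB and prefer hosts reporting fewer than 250 volumes (hypothetical).
filter_function = "volume.size < 500"
goodness_function = "(capabilities.total_volumes < 250) * 100"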
iet_conf = /etc/iet/ietd.conf string value DEPRECATED: IET configuration file image_compress_on_upload = True boolean value When possible, compress images uploaded to the image service image_conversion_address_space_limit = 1 integer value Address space limit in gigabytes to convert the image image_conversion_cpu_limit = 60 integer value CPU time limit in seconds to convert the image image_conversion_dir = USDstate_path/conversion string value Directory used for temporary storage during image conversion image_upload_use_cinder_backend = False boolean value If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service. image_upload_use_internal_tenant = False boolean value If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. image_volume_cache_enabled = False boolean value Enable the image volume cache for this backend. image_volume_cache_max_count = 0 integer value Max number of entries allowed in the image volume cache. 0 ⇒ unlimited. image_volume_cache_max_size_gb = 0 integer value Max size of the image volume cache for this backend in GB. 0 ⇒ unlimited. infortrend_cli_cache = False boolean value The Infortrend CLI cache. While set True, the RAID status report will use cache stored in the CLI. Never enable this unless the RAID is managed only by Openstack and only by one infortrend cinder-volume backend. Otherwise, CLI might report out-dated status to cinder and thus there might be some race condition among all backend/CLIs. infortrend_cli_max_retries = 5 integer value The maximum retry times if a command fails. infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar string value The Infortrend CLI absolute path. infortrend_cli_timeout = 60 integer value The timeout for CLI in seconds. infortrend_iqn_prefix = iqn.2002-10.com.infortrend string value Infortrend iqn prefix for iSCSI. `infortrend_pools_name = ` list value The Infortrend logical volumes name list. It is separated with comma. `infortrend_slots_a_channels_id = ` list value Infortrend raid channel ID list on Slot A for OpenStack usage. It is separated with comma. `infortrend_slots_b_channels_id = ` list value Infortrend raid channel ID list on Slot B for OpenStack usage. It is separated with comma. init_host_max_objects_retrieval = 0 integer value Max number of volumes and snapshots to be retrieved per batch during volume manager host initialization. Query results will be obtained in batches from the database and not in one shot to avoid extreme memory usage. Set 0 to turn off this functionality. initiator_assign_sandstone_target_ip = {} dict value Support initiator assign target with assign ip. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. instorage_mcs_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create instorage_mcs_iscsi_chap_enabled = True boolean value Configure CHAP authentication for iSCSI connections (Default: Enabled) instorage_mcs_localcopy_rate = 50 integer value Specifies the InStorage LocalCopy copy rate to be used when creating a full volume copy. 
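A possible sketch of the image volume cache, combined with the internal tenant options described earlier; the project and user IDs are placeholders for identifiers from your own deployment:
[DEFAULT]
cinder_internal_tenant_project_id = <project-uuid>
cinder_internal_tenant_user_id = <user-uuid>

[backend_defaults]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50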
The default rate is 50, and the valid rates are 1-100. instorage_mcs_localcopy_timeout = 120 integer value Maximum number of seconds to wait for LocalCopy to be prepared. instorage_mcs_vol_autoexpand = True boolean value Storage system autoexpand parameter for volumes (True/False) instorage_mcs_vol_compression = False boolean value Storage system compression option for volumes instorage_mcs_vol_grainsize = 256 integer value Storage system grain size parameter for volumes (32/64/128/256) instorage_mcs_vol_intier = True boolean value Enable InTier for volumes instorage_mcs_vol_iogrp = 0 string value The I/O group in which to allocate volumes. It can be a comma-separated list in which case the driver will select an io_group based on least number of volumes associated with the io_group. instorage_mcs_vol_rsize = 2 integer value Storage system space-efficiency parameter for volumes (percentage) instorage_mcs_vol_warning = 0 integer value Storage system threshold for volume capacity warnings (percentage) instorage_mcs_volpool_name = ['volpool'] list value Comma separated list of storage system storage pools for volumes. instorage_san_secondary_ip = None string value Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. iscsi_iotype = fileio string value Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device iscsi_secondary_ip_addresses = [] list value The list of secondary IP addresses of the iSCSI daemon `iscsi_target_flags = ` string value Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. iscsi_write_cache = on string value Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if target_helper is set to tgtadm. iser_helper = tgtadm string value The name of the iSER target user-land tool to use iser_ip_address = USDmy_ip string value The IP address that the iSER daemon is listening on iser_port = 3260 port value The port that the iSER daemon is listening on iser_target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSER volumes java_path = /usr/bin/java string value The Java absolute path. jovian_block_size = 64K string value Block size can be: 32K, 64K, 128K, 256K, 512K, 1M jovian_ignore_tpath = [] list value List of multipath ip addresses to ignore. jovian_pool = Pool-0 string value JovianDSS pool that holds all cinder volumes jovian_recovery_delay = 60 integer value Time before HA cluster failure. keystone_catalog_info = identity:Identity Service:publicURL string value Info to match when looking for keystone in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_auth_url is unset kioxia_block_size = 4096 integer value Volume block size in bytes - 512 or 4096 (Default). kioxia_cafile = None string value Cert for provisioner REST API SSL kioxia_desired_bw_per_gb = 0 integer value Desired bandwidth in B/s per GB. kioxia_desired_iops_per_gb = 0 integer value Desired IOPS/GB. kioxia_max_bw_per_gb = 0 integer value Upper limit for bandwidth in B/s per GB. kioxia_max_iops_per_gb = 0 integer value Upper limit for IOPS/GB. kioxia_max_replica_down_time = 0 integer value Replicated volume max downtime for replica in minutes. 
kioxia_num_replicas = 1 integer value Number of volume replicas. kioxia_provisioning_type = THICK string value Thin or thick volume, Default thick. kioxia_same_rack_allowed = False boolean value Can more than one replica be allocated to same rack. kioxia_snap_reserved_space_percentage = 0 integer value Percentage of the parent volume to be used for log. kioxia_snap_vol_reserved_space_percentage = 0 integer value Writable snapshot percentage of parent volume used for log. kioxia_snap_vol_span_allowed = True boolean value Allow span in snapshot volume - Default True. kioxia_span_allowed = True boolean value Allow span - Default True. kioxia_token = None string value KumoScale Provisioner auth token. kioxia_url = None string value KumoScale provisioner REST API URL kioxia_vol_reserved_space_percentage = 0 integer value Thin volume reserved capacity allocation percentage. kioxia_writable = False boolean value Volumes from snapshot writeable or not. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter manager_ips = {} dict value This option is to support the FSA to mount across the different nodes. The parameters takes the standard dict config form, manager_ips = host1:ip1, host2:ip2... max_age = 0 integer value Number of seconds between subsequent usage refreshes max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_over_subscription_ratio = 20.0 string value Representation of the over subscription ratio when thin provisioning is enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. If ratio is auto , Cinder will automatically calculate the ratio based on the provisioned capacity and the used space. If not set to auto, the ratio has to be a minimum of 1.0. message_reap_interval = 86400 integer value interval between periodic task runs to clean expired messages in seconds. message_ttl = 2592000 integer value message minimum life in seconds. migration_create_volume_timeout_secs = 300 integer value Timeout for creating the volume to migrate to when performing volume migration (seconds) monkey_patch = False boolean value Enable monkey patching monkey_patch_modules = [] list value List of modules/decorators to monkey patch my_ip = <based on operating system> host address value IP address of this host no_snapshot_gb_quota = False boolean value Whether snapshots count against gigabyte quota num_iser_scan_tries = 3 integer value The maximum number of times to rescan iSER target to find volume num_shell_tries = 3 integer value Number of times to attempt to run flakey shell commands num_volume_device_scan_tries = 3 integer value The maximum number of times to rescan targets to find volume nvmet_ns_id = 10 integer value The namespace id associated with the subsystem that will be created with the path for the LVM volume. nvmet_port_id = 1 port value The port that the NVMe target is listening on. osapi_max_limit = 1000 integer value The maximum number of items that a collection resource returns in a single response osapi_volume_ext_list = [] list value Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions osapi_volume_extension = ['cinder.api.contrib.standard_extensions'] multi valued osapi volume extension to load osapi_volume_listen = 0.0.0.0 string value IP address on which OpenStack Volume API listens osapi_volume_listen_port = 8776 port value Port on which OpenStack Volume API listens osapi_volume_use_ssl = False boolean value Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified. 
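For example, interval-based log rotation can be enabled with the log_rotate_* options above; switching log_rotation_type to size would instead rely on max_logfile_size_mb. A sketch:
[DEFAULT]
log_rotation_type = interval
log_rotate_interval = 1
log_rotate_interval_type = days
max_logfile_count = 30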
osapi_volume_workers = None integer value Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available. per_volume_size_limit = -1 integer value Max size allowed per volume, in gigabytes periodic_fuzzy_delay = 60 integer value Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 60 integer value Interval, in seconds, between running periodic tasks pool_id_filter = [] list value Pool IDs permitted for use Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandoned. So those configuration items are no longer used. pool_type = default string value Pool type, like sata-2copy Deprecated since: 14.0.0 *Reason:*FusionStorage cinder driver refactored the code with Restful method and the old CLI mode has been abandoned. So those configuration items are no longer used. public_endpoint = None string value Public URL to use for versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL. publish_errors = False boolean value Enables or disables publication of error events. quota_backup_gigabytes = 1000 integer value Total amount of storage, in gigabytes, allowed for backups per project quota_backups = 10 integer value Number of volume backups allowed per project quota_consistencygroups = 10 integer value Number of consistencygroups allowed per project quota_driver = cinder.quota.DbQuotaDriver string value Default driver to use for quota checks quota_gigabytes = 1000 integer value Total amount of storage, in gigabytes, allowed for volumes and snapshots per project quota_groups = 10 integer value Number of groups allowed per project quota_snapshots = 10 integer value Number of volume snapshots allowed per project quota_volumes = 10 integer value Number of volumes allowed per project rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. reinit_driver_count = 3 integer value Maximum number of times to reinitialize the driver if volume initialization fails. The retry interval backs off exponentially: 1s, 2s, 4s, and so on. replication_device = None dict value Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... report_discard_supported = False boolean value Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. report_interval = 10 integer value Interval, in seconds, between nodes reporting state to datastore reservation_clean_interval = USDreservation_expire integer value Interval between periodic task runs to clean expired reservations in seconds.
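As an illustration, per-project quotas and the per-volume size cap could be tightened as follows; the values are arbitrary and only show how the options combine:
[DEFAULT]
quota_volumes = 20
quota_snapshots = 20
quota_gigabytes = 2000
quota_backups = 10
quota_backup_gigabytes = 1000
per_volume_size_limit = 500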
reservation_expire = 86400 integer value Number of seconds until a reservation expires reserved_percentage = 0 integer value The percentage of backend capacity is reserved resource_query_filters_file = /etc/cinder/resource_filters.json string value Json file indicating user visible filter parameters for list queries. restore_discard_excess_bytes = True boolean value If True, always discard excess bytes when restoring volumes i.e. pad with zeroes. rootwrap_config = /etc/cinder/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? `san_hosts = ` list value IP address of Open-E JovianDSS SA `sandstone_pool = ` string value SandStone storage pool resource name. scheduler_default_filters = ['AvailabilityZoneFilter', 'CapacityFilter', 'CapabilitiesFilter'] list value Which filter class names to use for filtering hosts when not specified in the request. scheduler_default_weighers = ['CapacityWeigher'] list value Which weigher class names to use for weighing hosts. scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler string value Default scheduler driver to use scheduler_driver_init_wait_time = 60 integer value Maximum time in seconds to wait for the driver to report as ready scheduler_host_manager = cinder.scheduler.host_manager.HostManager string value The scheduler host manager class to use `scheduler_json_config_location = ` string value Absolute path to scheduler configuration JSON file. scheduler_manager = cinder.scheduler.manager.SchedulerManager string value Full class name for the Manager for scheduler scheduler_max_attempts = 3 integer value Maximum number of attempts to schedule a volume scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler string value Which handler to use for selecting the host/pool after weighing scst_target_driver = iscsi string value SCST target implementation can choose from multiple SCST target drivers. scst_target_iqn_name = None string value Certain ISCSI targets have predefined target names, SCST target driver uses this name. service_down_time = 60 integer value Maximum time since last check-in for a service to be considered up snapshot_name_template = snapshot-%s string value Template string to be used to generate snapshot names snapshot_same_host = True boolean value Create volume from snapshot at the host where snapshot resides split_loggers = False boolean value Log requests to multiple loggers. ssh_hosts_key_file = USDstate_path/ssh_known_hosts string value File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=USDstate_path/ssh_known_hosts state_path = /var/lib/cinder string value Top-level directory for maintaining cinder's state storage_availability_zone = nova string value Availability zone of this node. Can be overridden per volume backend with the option "backend_availability_zone". storage_protocol = iscsi string value Protocol for transferring data between host and storage back-end. strict_ssh_host_key_policy = False boolean value Option to enable strict host key checking. 
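For example, the default capacity-based scheduling can be inverted so that volumes are stacked onto the fullest backend rather than spread, as this sketch assumes:
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers = CapacityWeigher
# A negative multiplier stacks volumes instead of spreading them (see capacity_weight_multiplier above).
capacity_weight_multiplier = -1.0
scheduler_max_attempts = 3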
When set to "True" Cinder will only connect to systems with a host key present in the configured "ssh_hosts_key_file". When set to "False" the host key will be saved upon first connection and used for subsequent connections. Default=False swift_catalog_info = object-store:swift:publicURL string value Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. target_helper = tgtadm string value Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, spdk-nvmeof for SPDK NVMe-oF, or fake for testing. Note: The IET driver is deprecated and will be removed in the V release. target_ip_address = USDmy_ip string value The IP address that the iSCSI daemon is listening on target_port = 3260 port value The port that the iSCSI daemon is listening on target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSCSI volumes target_protocol = iscsi string value Determines the target protocol for new volumes, created with tgtadm, lioadm and nvmet target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser", in case of nvmet target set to "nvmet_rdma". tcp_keepalive = True boolean value Sets the value of TCP_KEEPALIVE (True/False) for each server socket. tcp_keepalive_count = None integer value Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X. tcp_keepalive_interval = None integer value Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. trace_flags = None list value List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. transfer_api_class = cinder.transfer.api.API string value The full class name of the volume transfer API class transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html until_refresh = 0 integer value Count of reservations until usage is refreshed use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. 
use_chap_auth = False boolean value Option to enable/disable CHAP authentication for targets. use_default_quota_class = True boolean value Enables or disables use of default quota class with default quota. use_eventlog = False boolean value Log output to Windows Event Log. use_forwarded_for = False boolean value Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy. use_multipath_for_image_xfer = False boolean value Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. verify_glance_signatures = enabled string value Enable image signature verification. Cinder uses the image signature metadata from Glance and verifies the signature of a signed image while downloading that image. There are two options here. enabled : verify when image has signature metadata. disabled : verification is turned off. If the image signature cannot be verified or if the image signature metadata is incomplete when required, then Cinder will not create the volume and update it into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create volumes. vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] list value A list of strings describing the VMDK createType subformats that are allowed. We recommend that you only include single-file-with-sparse-header variants to avoid potential host file exposure when processing named extents when an image is converted to raw format as it is written to a volume. If this list is empty, no VMDK images are allowed. volume_api_class = cinder.volume.api.API string value The full class name of the volume API class to use volume_backend_name = None string value The backend name for a given driver implementation volume_clear = zero string value Method used to wipe old volumes volume_clear_ionice = None string value The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. volume_clear_size = 0 integer value Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 ⇒ all volume_copy_blkio_cgroup_name = cinder-volume-copy string value The blkio cgroup name to be used to limit bandwidth of volume copy volume_copy_bps_limit = 0 integer value The upper limit of bandwidth of volume copy. 0 ⇒ unlimited volume_dd_blocksize = 1M string value The default block size used when copying/clearing volumes volume_manager = cinder.volume.manager.VolumeManager string value Full class name for the Manager for volume volume_name_template = volume-%s string value Template string to be used to generate volume names volume_number_multiplier = -1.0 floating point value Multiplier used for weighing volume number. Negative numbers mean to spread vs stack. volume_service_inithost_offload = False boolean value Offload pending volume delete during volume service startup volume_transfer_key_length = 16 integer value The number of characters in the autogenerated auth key. volume_transfer_salt_length = 8 integer value The number of characters in the salt. volume_usage_audit_period = month string value Time period for which to generate volume usages. The options are hour, day, month, or year. 
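A sketch of how the volume wiping and copy-throttling options above might be combined; the bandwidth limit and wipe size are illustrative:
[DEFAULT]
volume_clear = zero
volume_clear_size = 100
volume_clear_ionice = -c3
volume_copy_blkio_cgroup_name = cinder-volume-copy
# Roughly 100 MiB/s cap on volume copy bandwidth; 0 means unlimited.
volume_copy_bps_limit = 104857600
volume_dd_blocksize = 1M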
volumes_dir = USDstate_path/volumes string value Volume configuration file storage directory vrts_lun_sparse = True boolean value Create sparse LUN. vrts_target_config = /etc/cinder/vrts_target.xml string value VA config file. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A Python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. wsgi_server_debug = False boolean value True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies. zoning_mode = None string value FC Zoning mode configured, only fabric is supported now.
2.1.2. backend
The following table outlines the options available under the [backend] group in the /etc/cinder/cinder.conf file.
Table 2.1. backend Configuration option = Default value Type Description backend_host = None string value Backend override of host value.
2.1.3. backend_defaults
The following table outlines the options available under the [backend_defaults] group in the /etc/cinder/cinder.conf file.
Table 2.2. backend_defaults Configuration option = Default value Type Description auto_calc_max_oversubscription_ratio = False boolean value K2 driver will calculate max_oversubscription_ratio on setting this option as True. backend_availability_zone = None string value Availability zone for this volume backend. If not set, the storage_availability_zone option value is used as the default for all backends. backend_native_threads_pool_size = 20 integer value Size of the native threads pool for the backend. Increase for backends that heavily rely on this, like the RBD driver. chap = disabled string value CHAP authentication mode, effective only for iscsi (disabled|enabled) `chap_password = ` string value Password for specified CHAP account name. `chap_username = ` string value CHAP user name. check_max_pool_luns_threshold = False boolean value DEPRECATED: Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False. chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf string value Chiscsi (CXT) global defaults configuration file cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml string value Config file for cinder eternus_dx volume driver. cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml string value The configuration file for the Cinder Huawei driver. connection_type = iscsi string value Connection type to the IBM Storage Array cycle_period_seconds = 300 integer value This defines an optional cycle period that applies to Global Mirror relationships with a cycling mode of multi. A Global Mirror relationship using the multi cycling_mode performs a complete cycle at most once each period. The default is 300 seconds, and the valid seconds are 60-86400.
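For example, settings that should apply to every enabled backend can be declared once in the [backend_defaults] group rather than repeated per backend section, as in this sketch (the certificate path is a placeholder):
[backend_defaults]
backend_availability_zone = nova
backend_native_threads_pool_size = 20
driver_ssl_cert_verify = True
driver_ssl_cert_path = /etc/pki/tls/certs/ca-bundle.crt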
datacore_api_timeout = 300 integer value Seconds to wait for a response from a DataCore API call. datacore_disk_failed_delay = 300 integer value Seconds to wait for DataCore virtual disk to come out of the "Failed" state. datacore_disk_pools = [] list value List of DataCore disk pools that can be used by volume driver. datacore_disk_type = single string value DataCore virtual disk type (single/mirrored). Mirrored virtual disks require two storage servers in the server group. datacore_fc_unallowed_targets = [] list value List of FC targets that cannot be used to attach volume. To prevent the DataCore FibreChannel volume driver from using some front-end targets in volume attachment, specify this option and list the iqn and target machine for each target as the value, such as <wwpns:target name>, <wwpns:target name>, <wwpns:target name>. datacore_iscsi_chap_storage = USDstate_path/.datacore_chap string value Fully qualified file name where dynamically generated iSCSI CHAP secrets are stored. datacore_iscsi_unallowed_targets = [] list value List of iSCSI targets that cannot be used to attach volume. To prevent the DataCore iSCSI volume driver from using some front-end targets in volume attachment, specify this option and list the iqn and target machine for each target as the value, such as <iqn:target name>, <iqn:target name>, <iqn:target name>. datacore_storage_profile = None string value DataCore virtual disk storage profile. default_timeout = 31536000 integer value Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long. deferred_deletion_delay = 0 integer value Time delay in seconds before a volume is eligible for permanent removal after being tagged for deferred deletion. deferred_deletion_purge_interval = 60 integer value Number of seconds between runs of the periodic task to purge volumes tagged for deletion. dell_api_async_rest_timeout = 15 integer value Dell SC API async call default timeout in seconds. dell_api_sync_rest_timeout = 30 integer value Dell SC API sync call default timeout in seconds. dell_sc_api_port = 3033 port value Dell API port dell_sc_server_folder = openstack string value Name of the server folder to use on the Storage Center dell_sc_ssn = 64702 integer value Storage Center System Serial Number dell_sc_verify_cert = False boolean value Enable HTTPS SC certificate verification dell_sc_volume_folder = openstack string value Name of the volume folder to use on the Storage Center dell_server_os = Red Hat Linux 6.x string value Server OS type to use when creating a new server on the Storage Center. destroy_empty_storage_group = False boolean value To destroy storage group when the last LUN is removed from it. By default, the value is False. disable_discovery = False boolean value Disabling iSCSI discovery (sendtargets) for multipath connections on K2 driver. `dpl_pool = ` string value DPL pool uuid in which DPL volumes are stored. dpl_port = 8357 port value DPL port number. driver_client_cert = None string value The path to the client certificate for verification, if the driver supports it. driver_client_cert_key = None string value The path to the client certificate key for verification, if the driver supports it. driver_data_namespace = None string value Namespace for driver private data values to be saved in. 
driver_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend driver_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. driver_use_ssl = False boolean value Tell driver to use SSL for connection to backend storage if the driver supports it. `ds8k_devadd_unitadd_mapping = ` string value Mapping between IODevice address and unit address. ds8k_host_type = auto string value Set to zLinux if your OpenStack version is prior to Liberty and you're connecting to zLinux systems. Otherwise set to auto. Valid values for this parameter are: auto , AMDLinuxRHEL , AMDLinuxSuse , AppleOSX , Fujitsu , Hp , HpTru64 , HpVms , LinuxDT , LinuxRF , LinuxRHEL , LinuxSuse , Novell , SGI , SVC , SanFsAIX , SanFsLinux , Sun , VMWare , Win2000 , Win2003 , Win2008 , Win2012 , iLinux , nSeries , pLinux , pSeries , pSeriesPowerswap , zLinux , iSeries . ds8k_ssid_prefix = FF string value Set the first two digits of SSID. enable_deferred_deletion = False boolean value Enable deferred deletion. Upon deletion, volumes are tagged for deletion but will only be removed asynchronously at a later time. enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. enforce_multipath_for_image_xfer = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. excluded_domain_ip = None IP address value DEPRECATED: Fault Domain IP to be excluded from iSCSI returns. Deprecated since: Stein *Reason:*Replaced by excluded_domain_ips option excluded_domain_ips = [] list value Comma separated Fault Domain IPs to be excluded from iSCSI returns. expiry_thres_minutes = 720 integer value This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. extra_capabilities = {} string value User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties. filter_function = None string value String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. flashsystem_connection_protocol = FC string value Connection protocol should be FC. (Default is FC.) flashsystem_iscsi_portid = 0 integer value Default iSCSI Port ID of FlashSystem. (Default port is 0.) flashsystem_multihostmap_enabled = True boolean value Allows vdisk to multi host mapping. 
(Default is True) force_delete_lun_in_storagegroup = True boolean value Delete a LUN even if it is in Storage Groups. goodness_function = None string value String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. gpfs_hosts = [] list value Comma-separated list of IP address or hostnames of GPFS nodes. gpfs_hosts_key_file = USDstate_path/ssh_known_hosts string value File containing SSH host keys for the gpfs nodes with which driver needs to communicate. Default=USDstate_path/ssh_known_hosts gpfs_images_dir = None string value Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. gpfs_images_share_mode = None string value Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. gpfs_max_clone_depth = 0 integer value Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. gpfs_mount_point_base = None string value Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. `gpfs_private_key = ` string value Filename of private key to use for SSH authentication. gpfs_sparse_volumes = True boolean value Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. gpfs_ssh_port = 22 port value SSH port to use. gpfs_storage_pool = system string value Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. gpfs_strict_host_key_policy = False boolean value Option to enable strict gpfs host key checking while connecting to gpfs nodes. Default=False gpfs_user_login = root string value Username for GPFS nodes. `gpfs_user_password = ` string value Password for GPFS node user. hitachi_compute_target_ports = [] list value IDs of the storage ports used to attach volumes to compute nodes. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). hitachi_discard_zero_page = True boolean value Enable or disable zero page reclamation in a DP-VOL. hitachi_group_create = False boolean value If True, the driver will create host groups or iSCSI targets on storage ports as needed. hitachi_group_delete = False boolean value If True, the driver will delete host groups or iSCSI targets on storage ports as needed. hitachi_ldev_range = None string value Range of the LDEV numbers in the format of xxxx-yyyy that can be used by the driver. Values can be in decimal format (e.g. 1000) or in colon-separated hexadecimal format (e.g. 00:03:E8). hitachi_pool = None string value Pool number or pool name of the DP pool. hitachi_rest_tcp_keepalive = True boolean value Enables or disables use of REST API tcp keepalive hitachi_snap_pool = None string value Pool number or pool name of the snapshot pool. 
hitachi_storage_id = None string value Product number of the storage system. hitachi_target_ports = [] list value IDs of the storage ports used to attach volumes to the controller node. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). hitachi_zoning_request = False boolean value If True, the driver will configure FC zoning between the server and the storage system provided that FC zoning manager is enabled. `hpe3par_api_url = ` string value WSAPI Server URL. This setting applies to both 3PAR and Primera. Example 1: for 3PAR, URL is: https://<3par ip>:8080/api/v1 Example 2: for Primera, URL is: https://<primera ip>:443/api/v1 hpe3par_cpg = ['OpenStack'] list value List of the 3PAR / Primera CPG(s) to use for volume creation `hpe3par_cpg_snap = ` string value The 3PAR / Primera CPG to use for snapshots of volumes. If empty the userCPG will be used. hpe3par_debug = False boolean value Enable HTTP debugging to 3PAR / Primera hpe3par_iscsi_chap_enabled = False boolean value Enable CHAP authentication for iSCSI connections. hpe3par_iscsi_ips = [] list value List of target iSCSI addresses to use. `hpe3par_password = ` string value 3PAR / Primera password for the user specified in hpe3par_username `hpe3par_snapshot_expiration = ` string value The time in hours when a snapshot expires and is deleted. This must be larger than expiration `hpe3par_snapshot_retention = ` string value The time in hours to retain a snapshot. You can't delete it before this expires. `hpe3par_target_nsp = ` string value The nsp of 3PAR backend to be used when: (1) multipath is not enabled in cinder.conf. (2) Fiber Channel Zone Manager is not used. (3) the 3PAR backend is prezoned with this specific nsp only. For example if nsp is 2 1 2, the format of the option's value is 2:1:2 `hpe3par_username = ` string value 3PAR / Primera username with the edit role hpmsa_api_protocol = https string value HPMSA API interface protocol. hpmsa_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. hpmsa_pool_name = A string value Pool or Vdisk name to use for volume creation. hpmsa_pool_type = virtual string value linear (for Vdisk) or virtual (for Pool). hpmsa_verify_certificate = False boolean value Whether to verify HPMSA array SSL certificate. hpmsa_verify_certificate_path = None string value HPMSA array SSL certificate path. hypermetro_devices = None string value The remote device hypermetro will use. iet_conf = /etc/iet/ietd.conf string value DEPRECATED: IET configuration file ignore_pool_full_threshold = False boolean value Force LUN creation even if the full threshold of pool is reached. By default, the value is False. image_upload_use_cinder_backend = False boolean value If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service. image_upload_use_internal_tenant = False boolean value If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. image_volume_cache_enabled = False boolean value Enable the image volume cache for this backend. image_volume_cache_max_count = 0 integer value Max number of entries allowed in the image volume cache. 0 ⇒ unlimited. image_volume_cache_max_size_gb = 0 integer value Max size of the image volume cache for this backend in GB. 0 ⇒ unlimited. 
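The image_volume_cache_* options above work together: once image_volume_cache_enabled is set on a backend, the cache is bounded by the count and size limits. A minimal illustrative snippet follows; the values are examples only, not recommended defaults:

[backend_defaults]
image_volume_cache_enabled = True
# Keep at most 50 cached image-volumes, using at most 200 GB in total
image_volume_cache_max_count = 50
image_volume_cache_max_size_gb = 200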
included_domain_ips = [] list value Comma separated Fault Domain IPs to be included from iSCSI returns. infinidat_iscsi_netspaces = [] list value List of names of network spaces to use for iSCSI connectivity infinidat_pool_name = None string value Name of the pool from which volumes are allocated infinidat_storage_protocol = fc string value Protocol for transferring data between host and storage back-end. infinidat_use_compression = False boolean value Specifies whether to turn on compression for newly created volumes. initiator_auto_deregistration = False boolean value Automatically deregister initiators after the related storage group is destroyed. By default, the value is False. initiator_auto_registration = False boolean value Automatically register initiators. By default, the value is False. initiator_check = False boolean value Use this value to enable the initiator_check. interval = 3 integer value Use this value to specify length of the interval in seconds. io_port_list = None list value Comma separated iSCSI or FC ports to be used in Nova or Cinder. iscsi_initiators = None string value Mapping between hostname and its iSCSI initiator IP addresses. iscsi_iotype = fileio string value Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device iscsi_secondary_ip_addresses = [] list value The list of secondary IP addresses of the iSCSI daemon `iscsi_target_flags = ` string value Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. iscsi_write_cache = on string value Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if target_helper is set to tgtadm. iser_helper = tgtadm string value The name of the iSER target user-land tool to use iser_ip_address = USDmy_ip string value The IP address that the iSER daemon is listening on iser_port = 3260 port value The port that the iSER daemon is listening on iser_target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSER volumes lenovo_api_protocol = https string value Lenovo api interface protocol. lenovo_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. lenovo_pool_name = A string value Pool or Vdisk name to use for volume creation. lenovo_pool_type = virtual string value linear (for VDisk) or virtual (for Pool). lenovo_verify_certificate = False boolean value Whether to verify Lenovo array SSL certificate. lenovo_verify_certificate_path = None string value Lenovo array SSL certificate path. linstor_autoplace_count = 0 integer value Autoplace replication count on volume deployment. 0 = Full cluster replication without autoplace, 1 = Single node deployment without replication, 2 or greater = Replicated deployment with autoplace. linstor_controller_diskless = True boolean value True means Cinder node is a diskless LINSTOR node. linstor_default_blocksize = 4096 integer value Default Block size for Image restoration. When using iSCSI transport, this option specifies the block size. linstor_default_storage_pool_name = DfltStorPool string value Default Storage Pool name for LINSTOR. linstor_default_uri = linstor://localhost string value Default storage URI for LINSTOR. linstor_default_volume_group_name = drbd-vg string value Default Volume Group name for LINSTOR. Not Cinder Volume. 
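As an illustration of how the hpe3par_* options above combine into a backend section, the following sketch assumes an iSCSI-attached 3PAR/Primera array; the section name, driver path, addresses, and credentials are placeholders, not values taken from this reference:

[3par-iscsi]
volume_backend_name = 3par-iscsi
# Driver path is an assumption for an iSCSI 3PAR backend
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
hpe3par_api_url = https://3par.example.com:8080/api/v1
hpe3par_username = 3paradm
hpe3par_password = <password>
hpe3par_cpg = OpenStack
hpe3par_iscsi_ips = 10.0.0.11,10.0.0.12
hpe3par_iscsi_chap_enabled = False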
linstor_volume_downsize_factor = 4096 floating point value Default volume downscale size in KiB = 4 MiB. load_balance = False boolean value Enable/disable load balancing for a PowerMax backend. load_balance_real_time = False boolean value Enable/disable real-time performance metrics for Port level load balancing for a PowerMax backend. load_data_format = Avg string value Performance data format, not applicable for real-time metrics. Available options are "avg" and "max". load_look_back = 60 integer value How far in minutes to look back for diagnostic performance metrics in load calculation, minimum of 0 maximum of 1440 (24 hours). load_look_back_real_time = 1 integer value How far in minutes to look back for real-time performance metrics in load calculation, minimum of 1 maximum of 10. `lss_range_for_cg = ` string value Reserve LSSs for consistency group. lvm_conf_file = /etc/cinder/lvm.conf string value LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify None to not use a conf file even if one exists). lvm_mirrors = 0 integer value If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space lvm_suppress_fd_warnings = False boolean value Suppress leaked file descriptor warnings in LVM commands. lvm_type = auto string value Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. macrosan_client = None list value Macrosan iscsi_clients list. You can configure multiple clients. You can configure it in this format: (host; client_name; sp1_iscsi_port; sp2_iscsi_port), (host; client_name; sp1_iscsi_port; sp2_iscsi_port) Important warning, Client_name has the following requirements: [a-zA-Z0-9.-_:], the maximum number of characters is 31 E.g: (controller1; device1; eth-1:0; eth-2:0), (controller2; device2; eth-1:0/eth-1:1; eth-2:0/eth-2:1), macrosan_client_default = None string value This is the default connection ports' name for iscsi. This default configuration is used when no host related information is obtained.E.g: eth-1:0/eth-1:1; eth-2:0/eth-2:1 macrosan_fc_keep_mapped_ports = True boolean value In the case of an FC connection, the configuration item associated with the port is maintained. macrosan_fc_use_sp_port_nr = 1 integer value The use_sp_port_nr parameter is the number of online FC ports used by the single-ended memory when the FC connection is established in the switch non-all-pass mode. 
The maximum is 4 macrosan_force_unmap_itl = True boolean value Force disconnect while deleting volume macrosan_log_timing = True boolean value Whether enable log timing macrosan_pool = None string value Pool to use for volume creation macrosan_replication_destination_ports = None list value Slave device macrosan_replication_ipaddrs = None list value MacroSAN replication devices' ip addresses macrosan_replication_password = None string value MacroSAN replication devices' password macrosan_replication_username = None string value MacroSAN replication devices' username macrosan_sdas_ipaddrs = None list value MacroSAN sdas devices' ip addresses macrosan_sdas_password = None string value MacroSAN sdas devices' password macrosan_sdas_username = None string value MacroSAN sdas devices' username macrosan_snapshot_resource_ratio = 1.0 floating point value Set snapshot's resource ratio macrosan_thin_lun_extent_size = 8 integer value Set the thin lun's extent size macrosan_thin_lun_high_watermark = 20 integer value Set the thin lun's high watermark macrosan_thin_lun_low_watermark = 5 integer value Set the thin lun's low watermark `management_ips = ` string value List of Management IP addresses (separated by commas) max_luns_per_storage_group = 255 integer value Default max number of LUNs in a storage group. By default, the value is 255. max_over_subscription_ratio = 20.0 string value Representation of the over subscription ratio when thin provisioning is enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. If ratio is auto , Cinder will automatically calculate the ratio based on the provisioned capacity and the used space. If not set to auto, the ratio has to be a minimum of 1.0. metro_domain_name = None string value The remote metro device domain name. metro_san_address = None string value The remote metro device request url. metro_san_password = None string value The remote metro device san password. metro_san_user = None string value The remote metro device san user. metro_storage_pools = None string value The remote metro device pool names. `nas_host = ` string value IP address or Hostname of NAS system. nas_login = admin string value User name to connect to NAS system. nas_mount_options = None string value Options used to mount the storage backend file system where Cinder volumes are stored. `nas_password = ` string value Password to connect to NAS system. `nas_private_key = ` string value Filename of private key to use for SSH authentication. nas_secure_file_operations = auto string value Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. nas_secure_file_permissions = auto string value Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. 
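To make the max_over_subscription_ratio arithmetic concrete: with a ratio of 20.0 and 100 TiB of physical capacity, the scheduler treats the backend as offering up to 2000 TiB of thin-provisioned capacity; with 1.0, provisioned capacity cannot exceed physical capacity. An illustrative override:

[backend_defaults]
# 10.5 x 100 TiB physical = 1050 TiB of provisionable (thin) capacity
max_over_subscription_ratio = 10.5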
`nas_share_path = ` string value Path to the share to use for storing Cinder volumes. For example: "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 . nas_ssh_port = 22 port value SSH port to use to connect to NAS system. nas_volume_prov_type = thin string value Provisioning type that will be used when creating volumes. naviseccli_path = None string value Naviseccli Path. netapp_api_trace_pattern = (.*) string value A regular expression to limit the API tracing. This option is honored only if enabling api tracing with the trace_flags option. By default, all APIs will be traced. netapp_copyoffload_tool_path = None string value This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. netapp_host_type = None string value This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. netapp_login = None string value Administrative user account name used to access the storage system or proxy server. netapp_lun_ostype = None string value This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. netapp_lun_space_reservation = enabled string value This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand. netapp_password = None string value Password for the administrative user account specified in the netapp_login option. netapp_pool_name_search_pattern = (.+) string value This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. netapp_replication_aggregate_map = None dict value Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol/FlexGroup), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... netapp_replication_volume_online_timeout = 360 integer value Sets time in seconds to wait for a replication volume create to complete and go online. netapp_server_hostname = None string value The hostname (or IP address) for the storage system or proxy server. netapp_server_port = None integer value The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS. netapp_size_multiplier = 1.2 floating point value The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. 
Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. netapp_snapmirror_quiesce_timeout = 3600 integer value The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. netapp_storage_family = ontap_cluster string value The storage family type used on the storage system; the only valid value is ontap_cluster for using clustered Data ONTAP. netapp_storage_protocol = None string value The storage protocol to be used on the data path with the storage system. netapp_transport_type = http string value The transport protocol used when communicating with the storage system or proxy server. netapp_vserver = None string value This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. nexenta_blocksize = 4096 integer value Block size for datasets nexenta_chunksize = 32768 integer value NexentaEdge iSCSI LUN object chunk size `nexenta_client_address = ` string value NexentaEdge iSCSI Gateway client address for non-VIP service nexenta_dataset_compression = on string value Compression value for new ZFS folders. nexenta_dataset_dedup = off string value Deduplication value for new ZFS folders. `nexenta_dataset_description = ` string value Human-readable description for the folder. nexenta_encryption = False boolean value Defines whether NexentaEdge iSCSI LUN object has encryption enabled. `nexenta_folder = ` string value A folder where cinder created datasets will reside. nexenta_group_snapshot_template = group-snapshot-%s string value Template string to generate group snapshot name `nexenta_host = ` string value IP address of NexentaStor Appliance nexenta_host_group_prefix = cinder string value Prefix for iSCSI host groups on NexentaStor nexenta_iops_limit = 0 integer value NexentaEdge iSCSI LUN object IOPS limit `nexenta_iscsi_service = ` string value NexentaEdge iSCSI service name nexenta_iscsi_target_host_group = all string value Group of hosts which are allowed to access volumes `nexenta_iscsi_target_portal_groups = ` string value NexentaStor target portal groups nexenta_iscsi_target_portal_port = 3260 integer value Nexenta appliance iSCSI target portal port `nexenta_iscsi_target_portals = ` string value Comma separated list of portals for NexentaStor5, in format of IP1:port1,IP2:port2. Port is optional, default=3260. Example: 10.10.10.1:3267,10.10.1.2 nexenta_lu_writebackcache_disabled = False boolean value Postponed write to backing store or not `nexenta_lun_container = ` string value NexentaEdge logical path of bucket for LUNs nexenta_luns_per_target = 100 integer value Amount of LUNs per iSCSI target nexenta_mount_point_base = USDstate_path/mnt string value Base directory that contains NFS share mount points nexenta_nbd_symlinks_dir = /dev/disk/by-path string value NexentaEdge logical path of directory to store symbolic links to NBDs nexenta_nms_cache_volroot = True boolean value If set True cache NexentaStor appliance volroot option value. 
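The netapp_replication_aggregate_map entry above pairs with the replication_device option (described later in this table) when configuring whole-backend replication. A hedged sketch follows; the backend section, device id, and aggregate names are hypothetical, and the exact replication_device keys required depend on the target device:

[netapp-backend]
replication_device = backend_id:netapp-dr
netapp_replication_aggregate_map = backend_id:netapp-dr,aggr_src1:aggr_dst1,aggr_src2:aggr_dst2
netapp_replication_volume_online_timeout = 360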
nexenta_ns5_blocksize = 32 integer value Block size for datasets nexenta_origin_snapshot_template = origin-snapshot-%s string value Template string to generate origin name of clone nexenta_password = nexenta string value Password to connect to NexentaStor management REST API server nexenta_qcow2_volumes = False boolean value Create volumes as QCOW2 files rather than raw files nexenta_replication_count = 3 integer value NexentaEdge iSCSI LUN object replication count. `nexenta_rest_address = ` string value IP address of NexentaStor management REST API endpoint nexenta_rest_backoff_factor = 0.5 floating point value Specifies the backoff factor to apply between connection attempts to NexentaStor management REST API server nexenta_rest_connect_timeout = 30 floating point value Specifies the time limit (in seconds), within which the connection to NexentaStor management REST API server must be established nexenta_rest_password = nexenta string value Password to connect to NexentaEdge. nexenta_rest_port = 0 integer value HTTP(S) port to connect to NexentaStor management REST API server. If it is equal zero, 8443 for HTTPS and 8080 for HTTP is used nexenta_rest_protocol = auto string value Use http or https for NexentaStor management REST API connection (default auto) nexenta_rest_read_timeout = 300 floating point value Specifies the time limit (in seconds), within which NexentaStor management REST API server must send a response nexenta_rest_retry_count = 3 integer value Specifies the number of times to repeat NexentaStor management REST API call in case of connection errors and NexentaStor appliance EBUSY or ENOENT errors nexenta_rest_user = admin string value User name to connect to NexentaEdge. nexenta_rrmgr_compression = 0 integer value Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression. nexenta_rrmgr_connections = 2 integer value Number of TCP connections. nexenta_rrmgr_tcp_buf_size = 4096 integer value TCP Buffer size in KiloBytes. nexenta_shares_config = /etc/cinder/nfs_shares string value File with the list of available nfs shares nexenta_sparse = False boolean value Enables or disables the creation of sparse datasets nexenta_sparsed_volumes = True boolean value Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. nexenta_target_group_prefix = cinder string value Prefix for iSCSI target groups on NexentaStor nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder string value iqn prefix for NexentaStor iSCSI targets nexenta_use_https = True boolean value Use HTTP secure protocol for NexentaStor management REST API connections nexenta_user = admin string value User name to connect to NexentaStor management REST API server nexenta_volume = cinder string value NexentaStor pool name that holds all volumes nexenta_volume_group = iscsi string value Volume group for NexentaStor5 iSCSI nfs_mount_attempts = 3 integer value The number of attempts to mount NFS shares before raising an error. At least one attempt will be made to mount an NFS share, regardless of the value specified. nfs_mount_options = None string value Mount options passed to the NFS client. See the NFS(5) man page for details. nfs_mount_point_base = USDstate_path/mnt string value Base dir containing mount points for NFS shares. nfs_qcow2_volumes = False boolean value Create volumes as QCOW2 files rather than raw files. 
nfs_shares_config = /etc/cinder/nfs_shares string value File with the list of available NFS shares. nfs_snapshot_support = False boolean value Enable support for snapshots on the NFS driver. Platforms using libvirt <1.2.7 will encounter issues with this feature. nfs_sparsed_volumes = True boolean value Create volumes as sparsed files which take no space. If set to False volume is created as regular file. In such case volume creation takes a lot of time. nimble_pool_name = default string value Nimble Controller pool name nimble_subnet_label = * string value Nimble Subnet Label nimble_verify_cert_path = None string value Path to Nimble Array SSL certificate nimble_verify_certificate = False boolean value Whether to verify Nimble SSL Certificate num_iser_scan_tries = 3 integer value The maximum number of times to rescan iSER target to find volume num_shell_tries = 3 integer value Number of times to attempt to run flakey shell commands num_volume_device_scan_tries = 3 integer value The maximum number of times to rescan targets to find volume nvmet_ns_id = 10 integer value The namespace id associated with the subsystem that will be created with the path for the LVM volume. nvmet_port_id = 1 port value The port that the NVMe target is listening on. port_group_load_metric = PercentBusy string value Metric used for port group load calculation. port_load_metric = PercentBusy string value Metric used for port load calculation. powerflex_allow_migration_during_rebuild = False boolean value Allow volume migration during rebuild. powerflex_allow_non_padded_volumes = False boolean value Allow volumes to be created in Storage Pools when zero padding is disabled. This option should not be enabled if multiple tenants will utilize volumes from a shared Storage Pool. powerflex_max_over_subscription_ratio = 10.0 floating point value max_over_subscription_ratio setting for the driver. Maximum value allowed is 10.0. powerflex_rest_server_port = 443 port value Gateway REST server port. powerflex_round_volume_capacity = True boolean value Round volume sizes up to 8GB boundaries. PowerFlex/VxFlex OS requires volumes to be sized in multiples of 8GB. If set to False, volume creation will fail for volumes not sized properly powerflex_server_api_version = None string value PowerFlex/ScaleIO API version. This value should be left as the default value unless otherwise instructed by technical support. powerflex_storage_pools = None string value Storage Pools. Comma separated list of storage pools used to provide volumes. Each pool should be specified as a protection_domain_name:storage_pool_name value powerflex_unmap_volume_before_deletion = False boolean value Unmap volumes before deletion. powermax_array = None string value Serial number of the array to connect to. powermax_array_tag_list = None list value List of user assigned name for storage array. powermax_port_group_name_template = portGroupName string value User defined override for port group name. powermax_port_groups = None list value List of port groups containing frontend ports configured prior for server connection. powermax_service_level = None string value Service level to use for provisioning storage. Setting this as an extra spec in pool_name is preferable. powermax_short_host_name_template = shortHostName string value User defined override for short host name. powermax_srp = None string value Storage resource pool on array to use for provisioning. powerstore_appliances = [] list value Appliances names. 
Comma separated list of PowerStore appliances names used to provision volumes. Deprecated since: Wallaby *Reason:*Is not used anymore. PowerStore Load Balancer is used to provision volumes instead. powerstore_ports = [] list value Allowed ports. Comma separated list of PowerStore iSCSI IPs or FC WWNs (ex. 58:cc:f0:98:49:22:07:02) to be used. If option is not set all ports are allowed. proxy = cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy string value Proxy driver that connects to the IBM Storage Array pure_api_token = None string value REST API authorization token. pure_automatic_max_oversubscription_ratio = True boolean value Automatically determine an oversubscription ratio based on the current total data reduction values. If used this calculated value will override the max_over_subscription_ratio config option. pure_eradicate_on_delete = False boolean value When enabled, all Pure volumes, snapshots, and protection groups will be eradicated at the time of deletion in Cinder. Data will NOT be recoverable after a delete with this set to True! When disabled, volumes and snapshots will go into pending eradication state and can be recovered. pure_host_personality = None string value Determines how the Purity system tunes the protocol used between the array and the initiator. pure_iscsi_cidr = 0.0.0.0/0 string value CIDR of FlashArray iSCSI targets hosts are allowed to connect to. Default will allow connection to any IP address. pure_replica_interval_default = 3600 integer value Snapshot replication interval in seconds. pure_replica_retention_long_term_default = 7 integer value Retain snapshots per day on target for this time (in days.) pure_replica_retention_long_term_per_day_default = 3 integer value Retain how many snapshots for each day. pure_replica_retention_short_term_default = 14400 integer value Retain all snapshots on target for this time (in seconds.) pure_replication_pg_name = cinder-group string value Pure Protection Group name to use for async replication (will be created if it does not exist). pure_replication_pod_name = cinder-pod string value Pure Pod name to use for sync replication (will be created if it does not exist). pvme_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. pvme_pool_name = A string value Pool or Vdisk name to use for volume creation. qnap_management_url = None uri value The URL to management QNAP Storage. Driver does not support IPv6 address in URL. qnap_poolname = None string value The pool name in the QNAP Storage qnap_storage_protocol = iscsi string value Communication protocol to access QNAP storage quobyte_client_cfg = None string value Path to a Quobyte Client configuration file. quobyte_mount_point_base = USDstate_path/mnt string value Base dir containing the mount point for the Quobyte volume. quobyte_overlay_volumes = False boolean value Create new volumes from the volume_from_snapshot_cache by creating overlay files instead of full copies. This speeds up the creation of volumes from this cache. This feature requires the options quobyte_qcow2_volumes and quobyte_volume_from_snapshot_cache to be set to True. If one of these is set to False this option is ignored. quobyte_qcow2_volumes = True boolean value Create volumes as QCOW2 files rather than raw files. quobyte_sparsed_volumes = True boolean value Create volumes as sparse files which take no space. If set to False, volume is created as regular file. 
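As a sketch of how the pure_* options above might be combined for a FlashArray iSCSI backend (the section name, driver path, token, and addresses are placeholders, not defaults from this reference):

[pure-iscsi]
volume_backend_name = pure-iscsi
# Driver path is an assumption for an iSCSI FlashArray backend
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = 10.0.0.20
pure_api_token = <REST API authorization token>
pure_iscsi_cidr = 10.0.0.0/24
pure_eradicate_on_delete = False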
quobyte_volume_from_snapshot_cache = False boolean value Create a cache of volumes from merged snapshots to speed up creation of multiple volumes from a single snapshot. quobyte_volume_url = None string value Quobyte URL to the Quobyte volume using e.g. a DNS SRV record (preferred) or a host list (alternatively) like quobyte://<DIR host1>, <DIR host2>/<volume name> rados_connect_timeout = -1 integer value Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used. rados_connection_interval = 5 integer value Interval value (in seconds) between connection retries to ceph cluster. rados_connection_retries = 3 integer value Number of retries if connection to ceph cluster failed. `rbd_ceph_conf = ` string value Path to the ceph configuration file rbd_cluster_name = ceph string value The name of ceph cluster rbd_exclusive_cinder_pool = True boolean value Set to False if the pool is shared with other usages. On exclusive use driver won't query images' provisioned size as they will match the value calculated by the Cinder core code for allocated_capacity_gb. This reduces the load on the Ceph cluster as well as on the volume service. On non exclusive use driver will query the Ceph cluster for per image used disk, this is an intensive operation having an independent request for each image. rbd_flatten_volume_from_snapshot = False boolean value Flatten volumes created from snapshots to remove dependency from volume to snapshot rbd_iscsi_api_debug = False boolean value Enable client request debugging. `rbd_iscsi_api_password = ` string value The username for the rbd_target_api service `rbd_iscsi_api_url = ` string value The url to the rbd_target_api service `rbd_iscsi_api_user = ` string value The username for the rbd_target_api service rbd_iscsi_target_iqn = None string value The preconfigured target_iqn on the iscsi gateway. rbd_max_clone_depth = 5 integer value Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. Note: lowering this value will not affect existing volumes whose clone depth exceeds the new value. rbd_pool = rbd string value The RADOS pool where rbd volumes are stored rbd_secret_uuid = None string value The libvirt uuid of the secret for the rbd_user volumes rbd_store_chunk_size = 4 integer value Volumes will be chunked into objects of this size (in megabytes). rbd_user = None string value The RADOS client name for accessing rbd volumes - only set when using cephx authentication remove_empty_host = False boolean value To remove the host from Unity when the last LUN is detached from it. By default, it is False. replication_connect_timeout = 5 integer value Timeout value (in seconds) used when connecting to ceph cluster to do a demotion/promotion of volumes. If value < 0, no timeout is set and default librados value is used. replication_device = None dict value Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... report_discard_supported = False boolean value Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. 
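The rbd_* options above map naturally onto a Ceph-backed section. The following is a minimal sketch assuming a pool named volumes and cephx authentication; the driver path and secret UUID are assumptions, not values from this reference:

[ceph]
volume_backend_name = ceph
# Driver path is an assumption for an RBD backend
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>
rbd_flatten_volume_from_snapshot = False
rbd_max_clone_depth = 5
report_discard_supported = True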
report_dynamic_total_capacity = True boolean value Set to True for driver to report total capacity as a dynamic value (used + current free) and to False to report a static value (quota max bytes if defined and global size of cluster if not). reserved_percentage = 0 integer value The percentage of backend capacity that is reserved. retries = 200 integer value Use this value to specify the number of retries. san_api_port = None port value Port to use to access the SAN API `san_clustername = ` string value Cluster name to use for creating volumes `san_ip = ` string value IP address of SAN controller san_is_local = False boolean value Execute commands locally instead of over SSH; use if the volume service is running on the SAN device san_login = admin string value Username for SAN controller `san_password = ` string value Password for SAN controller `san_private_key = ` string value Filename of private key to use for SSH authentication san_ssh_port = 22 port value SSH port to use with SAN san_thin_provision = True boolean value Use thin provisioning for SAN volumes? scst_target_driver = iscsi string value SCST target implementation can choose from multiple SCST target drivers. scst_target_iqn_name = None string value Certain iSCSI targets have predefined target names, SCST target driver uses this name. seagate_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. seagate_pool_name = A string value Pool or vdisk name to use for volume creation. seagate_pool_type = virtual string value linear (for vdisk) or virtual (for virtual pool). `secondary_san_ip = ` string value IP address of secondary DSM controller secondary_san_login = Admin string value Secondary DSM user name `secondary_san_password = ` string value Secondary DSM user password secondary_sc_api_port = 3033 port value Secondary Dell API port sf_account_prefix = None string value Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (default behavior). The default is NO prefix. sf_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create sf_api_port = 443 port value SolidFire API port. Useful if the device API is behind a proxy on a different port. sf_api_request_timeout = 30 integer value Sets time in seconds to wait for an API request to complete. sf_cluster_pairing_timeout = 60 integer value Sets time in seconds to wait for clusters to complete pairing. sf_emulate_512 = True boolean value Set 512 byte emulation on volume creation. sf_enable_vag = False boolean value Utilize volume access groups on a per-tenant basis. sf_provisioning_calc = maxProvisionedSpace string value Change how SolidFire reports used space and provisioning calculations. If this parameter is set to usedSpace, the driver will report correct values as expected by Cinder thin provisioning. sf_svip = None string value Overrides default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud. sf_volume_clone_timeout = 600 integer value Sets time in seconds to wait for a clone of a volume or snapshot to complete. sf_volume_create_timeout = 60 integer value Sets time in seconds to wait for a create volume operation to complete. sf_volume_pairing_timeout = 3600 integer value Sets time in seconds to wait for a migrating volume to complete pairing and sync. sf_volume_prefix = UUID- string value Create SolidFire volumes with this prefix.
Volume names are of the form <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of UUID- . smbfs_default_volume_format = vhd string value Default format that will be used when creating volumes if no volume format is specified. smbfs_mount_point_base = C:\OpenStack\_mnt string value Base dir containing mount points for smbfs shares. smbfs_pool_mappings = {} dict value Mappings between share locations and pool names. If not specified, the share names will be used as pool names. Example: //addr/share:pool_name,//addr/share2:pool_name2 smbfs_shares_config = C:\OpenStack\smbfs_shares.txt string value File with the list of available smbfs shares. spdk_max_queue_depth = 64 integer value Queue depth for rdma transport. spdk_rpc_ip = None string value The NVMe target remote configuration IP address. spdk_rpc_password = None string value The NVMe target remote configuration password. spdk_rpc_port = 8000 port value The NVMe target remote configuration port. spdk_rpc_protocol = http string value Protocol to be used with SPDK RPC proxy spdk_rpc_username = None string value The NVMe target remote configuration username. ssh_conn_timeout = 30 integer value SSH connection timeout in seconds ssh_max_pool_conn = 5 integer value Maximum ssh connections in the pool ssh_min_pool_conn = 1 integer value Minimum ssh connections in the pool storage_protocol = iscsi string value Protocol for transferring data between host and storage back-end. storage_vnx_authentication_type = global string value VNX authentication scope type. By default, the value is global. storage_vnx_pool_names = None list value Comma-separated list of storage pool names to be used. storage_vnx_security_file_dir = None string value Directory path that contains the VNX security file. Make sure the security file is generated first. storpool_replication = 3 integer value The default StorPool chain replication value. Used when creating a volume with no specified type if storpool_template is not set. Also used for calculating the apparent free space reported in the stats. storpool_template = None string value The StorPool template for volumes with no type. storwize_peer_pool = None string value Specifies the name of the peer pool for hyperswap volume, the peer pool must exist on the other site. storwize_preferred_host_site = {} dict value Specifies the site information for host. One WWPN or multi WWPNs used in the host can be specified. For example: storwize_preferred_host_site=site1:wwpn1,site2:wwpn2&wwpn3 or storwize_preferred_host_site=site1:iqn1,site2:iqn2 storwize_san_secondary_ip = None string value Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. storwize_svc_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create storwize_svc_flashcopy_rate = 50 integer value Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default is rate is 50, and the valid rates are 1-150. storwize_svc_flashcopy_timeout = 120 integer value Maximum number of seconds to wait for FlashCopy to be prepared. storwize_svc_iscsi_chap_enabled = True boolean value Configure CHAP authentication for iSCSI connections (Default: Enabled) storwize_svc_mirror_pool = None string value Specifies the name of the pool in which mirrored copy is stored. Example: "pool2" storwize_svc_multihostmap_enabled = True boolean value This option no longer has any affect. It is deprecated and will be removed in the release. 
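Tying the sf_* options above together, a SolidFire backend section could look like the sketch below; the driver path, management address, and credentials are placeholders only:

[solidfire]
volume_backend_name = solidfire
# Driver path is an assumption for a SolidFire backend
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182
san_login = cluster-admin
san_password = <password>
sf_allow_tenant_qos = True
sf_account_prefix = hostname
sf_volume_prefix = UUID-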
storwize_svc_multipath_enabled = False boolean value Connect with multipath (FC only; iSCSI multipath is controlled by Nova) storwize_svc_retain_aux_volume = False boolean value Enable or disable retaining of aux volume on secondary storage during delete of the volume on primary storage or moving the primary volume from mirror to non-mirror with replication enabled. This option is valid for Spectrum Virtualize Family. storwize_svc_stretched_cluster_partner = None string value If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored.Example: "pool2" storwize_svc_vol_autoexpand = True boolean value Storage system autoexpand parameter for volumes (True/False) storwize_svc_vol_compression = False boolean value Storage system compression option for volumes storwize_svc_vol_easytier = True boolean value Enable Easy Tier for volumes storwize_svc_vol_grainsize = 256 integer value Storage system grain size parameter for volumes (8/32/64/128/256) storwize_svc_vol_iogrp = 0 string value The I/O group in which to allocate volumes. It can be a comma-separated list in which case the driver will select an io_group based on least number of volumes associated with the io_group. storwize_svc_vol_nofmtdisk = False boolean value Specifies that the volume not be formatted during creation. storwize_svc_vol_rsize = 2 integer value Storage system space-efficiency parameter for volumes (percentage) storwize_svc_vol_warning = 0 integer value Storage system threshold for volume capacity warnings (percentage) storwize_svc_volpool_name = ['volpool'] list value Comma separated list of storage system storage pools for volumes. suppress_requests_ssl_warnings = False boolean value Suppress requests library SSL certificate warnings. synology_admin_port = 5000 port value Management port for Synology storage. synology_device_id = None string value Device id for skip one time password check for logging in Synology storage if OTP is enabled. synology_one_time_pass = None string value One time password of administrator for logging in Synology storage if OTP is enabled. `synology_password = ` string value Password of administrator for logging in Synology storage. `synology_pool_name = ` string value Volume on Synology storage to be used for creating lun. synology_ssl_verify = True boolean value Do certificate validation or not if USDdriver_use_ssl is True synology_username = admin string value Administrator of Synology storage. target_helper = tgtadm string value Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, spdk-nvmeof for SPDK NVMe-oF, or fake for testing. Note: The IET driver is deprecated and will be removed in the V release. target_ip_address = USDmy_ip string value The IP address that the iSCSI daemon is listening on target_port = 3260 port value The port that the iSCSI daemon is listening on target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSCSI volumes target_protocol = iscsi string value Determines the target protocol for new volumes, created with tgtadm, lioadm and nvmet target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser", in case of nvmet target set to "nvmet_rdma". 
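The target_* options above control how volumes are exported over iSCSI. Combined with the LVM driver and volume group defaults documented later in this table, a backend using the LIO helper might be configured as follows (the section name and IP address are placeholders):

[lvm-1]
volume_backend_name = lvm-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_helper = lioadm
target_protocol = iscsi
target_ip_address = 192.168.1.10
target_port = 3260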
thres_avl_size_perc_start = 20 integer value If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. thres_avl_size_perc_stop = 60 integer value When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. trace_flags = None list value List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. u4p_failover_autofailback = True boolean value If the driver should automatically failback to the primary instance of Unisphere when a successful connection is re-established. u4p_failover_backoff_factor = 1 integer value A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). Retries will sleep for: {backoff factor} * (2 ^ ({number of total retries} - 1)) seconds. u4p_failover_retries = 3 integer value The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. u4p_failover_target = None dict value Dictionary of Unisphere failover target info. u4p_failover_timeout = 20.0 integer value How long to wait for the server to send data before giving up. unique_fqdn_network = True boolean value Whether or not our private network has unique FQDN on each initiator or not. For example networks with QA systems usually have multiple servers/VMs with the same FQDN. When true this will create host entries on 3PAR using the FQDN, when false it will use the reversed IQN/WWNN. unity_io_ports = [] list value A comma-separated list of iSCSI or FC ports to be used. Each port can be Unix-style glob expressions. unity_storage_pool_names = [] list value A comma-separated list of storage pool names to be used. use_chap_auth = False boolean value Option to enable/disable CHAP authentication for targets. use_multipath_for_image_xfer = False boolean value Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? This parameter needs to be configured for each backend section or in [backend_defaults] section as a common configuration for all backends. vmax_workload = None string value Workload, setting this as an extra spec in pool_name is preferable. vmware_adapter_type = lsiLogic string value Default adapter type to be used for attaching volumes. vmware_api_retry_count = 10 integer value Number of times VMware vCenter server API must be retried upon connection related issues. vmware_ca_file = None string value CA bundle file to use in verifying the vCenter server certificate. vmware_cluster_name = None multi valued Name of a vCenter compute cluster where volumes should be created. vmware_connection_pool_size = 10 integer value Maximum number of connections in http connection pool. vmware_datastore_regex = None string value Regular expression pattern to match the name of datastores where backend volumes are created. vmware_enable_volume_stats = False boolean value If true, this enables the fetching of the volume stats from the backend. This has potential performance issues at scale. When False, the driver will not collect ANY stats about the backend. 
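As an illustration, the vmware_* options in this table (including the vmware_host_* connection options in the following entries) can be combined into a vCenter-backed section like the sketch below; the driver path, hostnames, and credentials are assumptions:

[vmware-vmdk]
volume_backend_name = vmware-vmdk
# Driver path is an assumption for a VMDK backend
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = administrator@vsphere.local
vmware_host_password = <password>
vmware_cluster_name = cluster-1
vmware_insecure = False
vmware_ca_file = /etc/ssl/certs/vcenter-ca.pem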
vmware_host_ip = None string value IP address for connecting to VMware vCenter server. vmware_host_password = None string value Password for authenticating with VMware vCenter server. vmware_host_port = 443 port value Port number for connecting to VMware vCenter server. vmware_host_username = None string value Username for authenticating with VMware vCenter server. vmware_host_version = None string value Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from VMware vCenter server. Set this configuration only if you want to override the vCenter server version. vmware_image_transfer_timeout_secs = 7200 integer value Timeout in seconds for VMDK volume transfer between Cinder and Glance. vmware_insecure = False boolean value If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if "vmware_ca_file" is set. vmware_lazy_create = True boolean value If true, the backend volume in vCenter server is created lazily when the volume is created without any source. The backend volume is created when the volume is attached, uploaded to image service or during backup. vmware_max_objects_retrieval = 100 integer value Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. vmware_snapshot_format = template string value Volume snapshot format in vCenter server. vmware_storage_profile = None multi valued Names of storage profiles to be monitored. Only used when vmware_enable_volume_stats is True. vmware_task_poll_interval = 2.0 floating point value The interval (in seconds) for polling remote tasks invoked on VMware vCenter server. vmware_tmp_dir = /tmp string value Directory where virtual disks are stored during volume backup and restore. vmware_volume_folder = Volumes string value Name of the vCenter inventory folder that will contain Cinder volumes. This folder will be created under "OpenStack/<project_folder>", where project_folder is of format "Project (<volume_project_id>)". vmware_wsdl_location = None string value Optional VIM service WSDL Location e.g http://<server>/vimService.wsdl . Optional over-ride to default location for bug work-arounds. vnx_async_migrate = True boolean value Always use asynchronous migration during volume cloning and creating from snapshot. As described in configuration doc, async migration has some constraints. Besides using metadata, customers could use this option to disable async migration. Be aware that async_migrate in metadata overrides this option when both are set. By default, the value is True. volume_backend_name = None string value The backend name for a given driver implementation volume_clear = zero string value Method used to wipe old volumes volume_clear_ionice = None string value The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. volume_clear_size = 0 integer value Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 ⇒ all volume_copy_blkio_cgroup_name = cinder-volume-copy string value The blkio cgroup name to be used to limit bandwidth of volume copy volume_copy_bps_limit = 0 integer value The upper limit of bandwidth of volume copy. 
0 ⇒ unlimited volume_dd_blocksize = 1M string value The default block size used when copying/clearing volumes volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver string value Driver to use for volume creation volume_group = cinder-volumes string value Name for the VG that will contain exported volumes volumes_dir = $state_path/volumes string value Volume configuration file storage directory vxflexos_allow_migration_during_rebuild = False boolean value Renamed to powerflex_allow_migration_during_rebuild. vxflexos_allow_non_padded_volumes = False boolean value Renamed to powerflex_allow_non_padded_volumes. vxflexos_max_over_subscription_ratio = 10.0 floating point value Renamed to powerflex_max_over_subscription_ratio. vxflexos_rest_server_port = 443 port value Renamed to powerflex_rest_server_port. vxflexos_round_volume_capacity = True boolean value Renamed to powerflex_round_volume_capacity. vxflexos_server_api_version = None string value Renamed to powerflex_server_api_version. vxflexos_storage_pools = None string value Renamed to powerflex_storage_pools. vxflexos_unmap_volume_before_deletion = False boolean value Renamed to powerflex_unmap_volume_before_deletion. vzstorage_default_volume_format = raw string value Default format that will be used when creating volumes if no volume format is specified. vzstorage_mount_options = None list value Mount options passed to the vzstorage client. See section of the pstorage-mount man page for details. vzstorage_mount_point_base = $state_path/mnt string value Base dir containing mount points for vzstorage shares. vzstorage_shares_config = /etc/cinder/vzstorage_shares string value File with the list of available vzstorage shares. vzstorage_sparsed_volumes = True boolean value Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes a lot of time. windows_iscsi_lun_path = C:\iSCSIVirtualDisks string value Path to store VHD-backed volumes xtremio_array_busy_retry_count = 5 integer value Number of retries in case array is busy xtremio_array_busy_retry_interval = 5 integer value Interval between retries in case array is busy xtremio_clean_unused_ig = False boolean value Should the driver remove initiator groups with no volumes after the last connection was terminated. Since the behavior till now was to leave the IG be, we default to False (not deleting IGs without connected volumes); setting this parameter to True will remove any IG after terminating its connection to the last volume. `xtremio_cluster_name = ` string value XMS cluster id in multi-cluster environment xtremio_ports = [] list value Allowed ports. Comma separated list of XtremIO iSCSI IPs or FC WWNs (ex. 58:cc:f0:98:49:22:07:02) to be used. If option is not set all ports are allowed. xtremio_volumes_per_glance_cache = 100 integer value Number of volumes created from each cached glance image zadara_access_key = None string value VPSA access key zadara_default_snap_policy = False boolean value VPSA - Attach snapshot policy for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_gen3_vol_compress = False boolean value VPSA - Enable compression for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value.
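Because the vxflexos_* options above are only legacy aliases, new configurations should use the renamed powerflex_* equivalents directly; an illustrative fragment (the section name, protection domain, and pool names are placeholders):

[powerflex]
powerflex_storage_pools = domain1:pool1
powerflex_rest_server_port = 443
powerflex_max_over_subscription_ratio = 10.0
powerflex_round_volume_capacity = True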
zadara_gen3_vol_dedupe = False boolean value VPSA - Enable deduplication for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_ssl_cert_verify = True boolean value If set to True the http client will validate the SSL certificate of the VPSA endpoint. zadara_vol_encrypt = False boolean value VPSA - Default encryption policy for volumes. If the option is neither configured nor provided as metadata, the VPSA will inherit the default value. zadara_vpsa_host = None host address value VPSA - Management Host name or IP address zadara_vpsa_poolname = None string value VPSA - Storage Pool assigned for volumes zadara_vpsa_port = None port value VPSA - Port number zadara_vpsa_use_ssl = False boolean value VPSA - Use SSL connection 2.1.4. barbican The following table outlines the options available under the [barbican] group in the /etc/cinder/cinder.conf file. Table 2.3. barbican Configuration option = Default value Type Description auth_endpoint = http://localhost/identity/v3 string value Use this endpoint to connect to Keystone barbican_api_version = None string value Version of the Barbican API, for example: "v1" barbican_endpoint = None string value Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" barbican_endpoint_type = public string value Specifies the type of endpoint. Allowed values are: public, private, and admin number_of_retries = 60 integer value Number of times to retry poll for key creation completion retry_delay = 1 integer value Number of seconds to wait before retrying poll for key creation completion verify_ssl = True boolean value Specifies if insecure TLS (https) requests. If False, the server's certificate will not be validated, if True, we can set the verify_ssl_path config meanwhile. verify_ssl_path = None string value A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates which verify_ssh is True. If verify_ssl is False, this is ignored. 2.1.5. brcd_fabric_example The following table outlines the options available under the [brcd_fabric_example] group in the /etc/cinder/cinder.conf file. Table 2.4. brcd_fabric_example Configuration option = Default value Type Description `fc_fabric_address = ` string value Management IP of fabric. `fc_fabric_password = ` string value Password for user. fc_fabric_port = 22 port value Connecting port `fc_fabric_ssh_cert_path = ` string value Local SSH certificate Path. `fc_fabric_user = ` string value Fabric user ID. fc_southbound_protocol = REST_HTTP string value South bound connector for the fabric. fc_virtual_fabric_id = None string value Virtual Fabric ID. zone_activate = True boolean value Overridden zoning activation state. zone_name_prefix = openstack string value Overridden zone name prefix. zoning_policy = initiator-target string value Overridden zoning policy. 2.1.6. cisco_fabric_example The following table outlines the options available under the [cisco_fabric_example] group in the /etc/cinder/cinder.conf file. Table 2.5. 
cisco_fabric_example Configuration option = Default value Type Description `cisco_fc_fabric_address = ` string value Management IP of fabric `cisco_fc_fabric_password = ` string value Password for user cisco_fc_fabric_port = 22 port value Connecting port `cisco_fc_fabric_user = ` string value Fabric user ID cisco_zone_activate = True boolean value overridden zoning activation state cisco_zone_name_prefix = None string value overridden zone name prefix cisco_zoning_policy = initiator-target string value overridden zoning policy cisco_zoning_vsan = None string value VSAN of the Fabric 2.1.7. coordination The following table outlines the options available under the [coordination] group in the /etc/cinder/cinder.conf file. Table 2.6. coordination Configuration option = Default value Type Description backend_url = file://USDstate_path string value The backend URL to use for distributed coordination. 2.1.8. cors The following table outlines the options available under the [cors] group in the /etc/cinder/cinder.conf file. Table 2.7. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID', 'X-Trace-Info', 'X-Trace-HMAC', 'OpenStack-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH', 'HEAD'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID', 'OpenStack-API-Version'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 2.1.9. database The following table outlines the options available under the [database] group in the /etc/cinder/cinder.conf file. Table 2.8. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. 
db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 2.1.10. fc-zone-manager The following table outlines the options available under the [fc-zone-manager] group in the /etc/cinder/cinder.conf file. Table 2.9. fc-zone-manager Configuration option = Default value Type Description brcd_sb_connector = HTTP string value South bound connector for zoning operation cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI string value Southbound connector for zoning operation enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. fc_fabric_names = None string value Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService string value FC SAN Lookup Service zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver string value FC Zone Driver responsible for zone management zoning_policy = initiator-target string value Zoning policy configured by user; valid values include "initiator-target" or "initiator" 2.1.11. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/cinder/cinder.conf file. Table 2.10. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. 
disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healtcheck requests on. 2.1.12. key_manager The following table outlines the options available under the [key_manager] group in the /etc/cinder/cinder.conf file. Table 2.11. key_manager Configuration option = Default value Type Description auth_type = None string value The type of authentication credential to create. Possible values are token , password , keystone_token , and keystone_password . Required if no context is passed to the credential factory. auth_url = None string value Use this endpoint to connect to Keystone. backend = barbican string value Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. domain_id = None string value Domain ID for domain scoping. Optional for keystone_token and keystone_password auth_type. domain_name = None string value Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. fixed_key = None string value Fixed key returned by key manager, specified in hex password = None string value Password for authentication. Required for password and keystone_password auth_type. project_domain_id = None string value Project's domain ID for project. Optional for keystone_token and keystone_password auth_type. project_domain_name = None string value Project's domain name for project. Optional for keystone_token and keystone_password auth_type. project_id = None string value Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. project_name = None string value Project name for project scoping. Optional for keystone_token and keystone_password auth_type. reauthenticate = True boolean value Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. token = None string value Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. trust_id = None string value Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. user_domain_id = None string value User's domain ID for authentication. Optional for keystone_token and keystone_password auth_type. user_domain_name = None string value User's domain name for authentication. Optional for keystone_token and keystone_password auth_type. user_id = None string value User ID for authentication. Optional for keystone_token and keystone_password auth_type. username = None string value Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. 2.1.13. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/cinder/cinder.conf file. Table 2.12. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. 
Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. 
memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 2.1.14. nova The following table outlines the options available under the [nova] group in the /etc/cinder/cinder.conf file. Table 2.13. nova Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. insecure = False boolean value Verify HTTPS connections. interface = public string value Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. keyfile = None string value PEM encoded client certificate key file region_name = None string value Name of nova region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. 
timeout = None integer value Timeout value for http requests token_auth_url = None string value The authentication URL for the nova connection when using the current users token 2.1.15. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/cinder/cinder.conf file. Table 2.14. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 2.1.16. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/cinder/cinder.conf file. Table 2.15. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. 
multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 2.1.17. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/cinder/cinder.conf file. Table 2.16. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. 
If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 2.1.18. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/cinder/cinder.conf file. Table 2.17. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 2.1.19. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/cinder/cinder.conf file. Table 2.18. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist.MessageUndeliverable exception will be used to loop for a timeout to lets a chance to sender to recover.This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumerswhen queue is down heartbeat_in_pthread = True boolean value Run the health check heartbeat thread through a native python thread by default. 
If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How often times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 2.1.20. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/cinder/cinder.conf file. Table 2.19. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. 
secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 2.1.21. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/cinder/cinder.conf file. Table 2.20. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 2.1.22. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/cinder/cinder.conf file. Table 2.21. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 2.1.23. oslo_versionedobjects The following table outlines the options available under the [oslo_versionedobjects] group in the /etc/cinder/cinder.conf file. Table 2.22. oslo_versionedobjects Configuration option = Default value Type Description fatal_exception_format_errors = False boolean value Make exception message format errors fatal 2.1.24. 
privsep The following table outlines the options available under the [privsep] group in the /etc/cinder/cinder.conf file. Table 2.23. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 2.1.25. profiler The following table outlines the options available under the [profiler] group in the /etc/cinder/cinder.conf file. Table 2.24. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. 
sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 2.1.26. sample_castellan_source The following table outlines the options available under the [sample_castellan_source] group in the /etc/cinder/cinder.conf file. Table 2.25. sample_castellan_source Configuration option = Default value Type Description config_file = None string value The path to a castellan configuration file. driver = None string value The name of the driver that can load this configuration source. mapping_file = None string value The path to a configuration/castellan_id mapping file. 2.1.27. sample_remote_file_source The following table outlines the options available under the [sample_remote_file_source] group in the /etc/cinder/cinder.conf file. Table 2.26. sample_remote_file_source Configuration option = Default value Type Description ca_path = None string value The path to a CA_BUNDLE file or directory with certificates of trusted CAs. client_cert = None string value Client side certificate, as a single file path containing either the certificate only or the private key and the certificate. client_key = None string value Client side private key, in case client_cert is specified but does not include the private key. driver = None string value The name of the driver that can load this configuration source. uri = None uri value Required option with the URI of the extra configuration file's location. 2.1.28. service_user The following table outlines the options available under the [service_user] group in the /etc/cinder/cinder.conf file. Table 2.27. service_user Configuration option = Default value Type Description auth-url = None string value Authentication URL cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to send_service_user_token = False boolean value When True, if sending a user token to a REST API, also send a service token. split-loggers = False boolean value Log requests to multiple loggers. 
system-scope = None string value Scope for system operations timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 2.1.29. ssl The following table outlines the options available under the [ssl] group in the /etc/cinder/cinder.conf file. Table 2.28. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 2.1.30. vault The following table outlines the options available under the [vault] group in the /etc/cinder/cinder.conf file. Table 2.29. vault Configuration option = Default value Type Description approle_role_id = None string value AppRole role_id for authentication with vault approle_secret_id = None string value AppRole secret_id for authentication with vault kv_mountpoint = secret string value Mountpoint of KV store in Vault to use, for example: secret kv_version = 2 integer value Version of KV store in Vault to use, for example: 2 root_token_id = None string value root token for vault ssl_ca_crt_file = None string value Absolute path to ca cert file use_ssl = False boolean value SSL Enabled/Disabled vault_url = http://127.0.0.1:8200 string value Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuration_reference/cinder |
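The reference above lists defaults only; each option is set under its group heading in /etc/cinder/cinder.conf. As a minimal, hedged sketch (the connection string, endpoint, and lock path below are placeholder values, not recommendations from this reference), a deployment that stores keys in Barbican and sends notifications over the messaging bus might combine a handful of the documented options like this:

    [database]
    # SQLAlchemy connection string (placeholder credentials and host)
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    max_pool_size = 5

    [key_manager]
    backend = barbican

    [barbican]
    auth_endpoint = http://localhost/identity/v3
    barbican_endpoint_type = public

    [oslo_messaging_notifications]
    driver = messagingv2
    topics = notifications

    [oslo_concurrency]
    # assumed path; use a directory writable only by the user running the service
    lock_path = /var/lib/cinder/tmp

Each section name matches a group heading in this chapter, and every key shown is described in the corresponding table.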
Chapter 7. Managing locations | Chapter 7. Managing locations Locations function similar to organizations: they provide a method to group resources and assign hosts. Organizations and locations have the following conceptual differences: Locations are based on physical or geographical settings. Locations have a hierarchical structure. 7.1. Creating a location Use this procedure to create a location so that you can manage your hosts and resources by location. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Locations . Click New Location . Optional: From the Parent list, select a parent location. This creates a location hierarchy. In the Name field, enter a name for the location. Optional: In the Description field, enter a description for the location. Click Submit . If you have hosts with no location assigned, add any hosts that you want to assign to the new location, then click Proceed to Edit . Assign any infrastructure resources that you want to add to the location. This includes networking resources, installation media, Kickstart templates, and other parameters. You can return to this page at any time by navigating to Administer > Locations and then selecting a location to edit. Click Submit to save your changes. CLI procedure Enter the following command to create a location: 7.2. Creating multiple locations The following example Bash script creates three locations - London, Munich, Boston - and assigns them to the Example Organization. ORG=" Example Organization " LOCATIONS=" London Munich Boston " for LOC in USD{LOCATIONS} do hammer location create --name "USD{LOC}" hammer location add-organization --name "USD{LOC}" --organization "USD{ORG}" done 7.3. Setting the location context A location context defines the location to use for a host and its associated resources. Procedure The location menu is the second menu item in the menu bar, on the upper left of the Satellite web UI. If you have not selected a current location, the menu displays Any Location . Click Any location and select the location to use. CLI procedure While using the CLI, include either --location " My_Location " or --location-id " My_Location_ID " as an option. For example: This command lists hosts associated with the My_Location location. 7.4. Deleting a location You can delete a location if the location is not associated with any lifecycle environments or host groups. If there are any lifecycle environments or host groups associated with the location you are about to delete, remove them by navigating to Administer > Locations and clicking the relevant location. Do not delete the default location created during installation because the default location is a placeholder for any unassociated hosts in the Satellite environment. There must be at least one location in the environment at any given time. Procedure In the Satellite web UI, navigate to Administer > Locations . Select Delete from the list to the right of the name of the location you want to delete. Click OK to delete the location. CLI procedure Enter the following command to retrieve the ID of the location that you want to delete: From the output, note the ID of the location that you want to delete. Enter the following command to delete the location: | [
"hammer location create --description \" My_Location_Description \" --name \" My_Location \" --parent-id \" My_Location_Parent_ID \"",
"ORG=\" Example Organization \" LOCATIONS=\" London Munich Boston \" for LOC in USD{LOCATIONS} do hammer location create --name \"USD{LOC}\" hammer location add-organization --name \"USD{LOC}\" --organization \"USD{ORG}\" done",
"hammer host list --location \" My_Location \"",
"hammer location list",
"hammer location delete --id Location ID"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/managing_locations_admin |
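Building on the commands above, the following hedged sketch (the location names and the ID are illustrative) shows how the --parent-id option can be combined with hammer location list to create a two-level hierarchy and then scope a CLI query to the child location:

    # Create the parent location and look up its ID
    hammer location create --name "Boston"
    hammer location list   # note the ID reported for "Boston", for example 42

    # Create a child location under the parent (replace 42 with the real ID)
    hammer location create --name "Boston-DC1" --parent-id 42

    # Scope a later query to the child location
    hammer host list --location "Boston-DC1"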
Chapter 15. Provisioning Cloud Instances on Google Compute Engine | Chapter 15. Provisioning Cloud Instances on Google Compute Engine Red Hat Satellite can interact with Google Compute Engine (GCE), including creating new virtual machines and controlling their power management states. You can only use golden images supported by Red Hat with Satellite for creating GCE hosts. Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management Guide. In your GCE project, configure a service account with the necessary IAM Compute role. For more information, see Compute Engine IAM roles in the GCE documentation. In your GCE project-wide metadata, set enable-oslogin to FALSE . For more information, see Enabling or disabling OS Login in the GCE documentation. Optional: If you want to use Puppet with GCE hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs. Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information about provisioning templates, see Provisioning Templates in Provisioning Hosts . 15.1. Installing Google GCE Plugin Install the Google GCE plugin to attach a GCE compute resource provider to Satellite. This allows you to manage and deploy hosts to GCE. Procedure Install the Google GCE compute resource provider on your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Google GCE plugin. 15.2. Adding a Google GCE Connection to Satellite Server Use this procedure to add Google Compute Engine (GCE) as a compute resource in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In Google GCE, generate a service account key in JSON format. Copy the file from your local machine to Satellite Server: On Satellite Server, change the owner for your service account key to the foreman user: On Satellite Server, configure permissions for your service account key to ensure that the file is readable: On Satellite Server, restore SELinux context for your service account key: In the Satellite web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource . In the Name field, enter a name for the compute resource. From the Provider list, select Google . Optional: In the Description field, enter a description for the resource. In the Google Project ID field, enter the project ID. In the Client Email field, enter the client email. In the Certificate Path field, enter the path to the service account key. For example, /usr/share/foreman/ gce_key .json . Click Load Zones to populate the list of zones from your GCE environment. From the Zone list, select the GCE zone to use. Click Submit . CLI procedure In Google GCE, generate a service account key in JSON format. 
Copy the file from your local machine to Satellite Server: On Satellite Server, change the owner for your service account key to the foreman user: On Satellite Server, configure permissions for your service account key to ensure that the file is readable: On Satellite Server, restore SELinux context for your service account key: Use the hammer compute-resource create command to add a GCE compute resource to Satellite: 15.3. Adding Google Compute Engine Images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the Google Compute Engine connection. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. Specify a user other than root , because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers. From the Image list, select an image from the Google Compute Engine compute resource. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. With the --username option, specify a user other than root , because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers. 15.4. Adding Google GCE Details to a Compute Profile Use this procedure to add Google GCE hardware settings to a compute profile. When you create a host on Google GCE using this compute profile, these settings are automatically populated. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the GCE compute resource. From the Machine Type list, select the machine type to use for provisioning. From the Image list, select the image to use for provisioning. From the Network list, select the Google GCE network to use for provisioning. Optional: Select the Associate Ephemeral External IP checkbox to assign a dynamic ephemeral IP address that Satellite uses to communicate with the host. This public IP address changes when you reboot the host. If you need a permanent IP address, reserve a static public IP address on Google GCE and attach it to the host. In the Size (GB) field, enter the size of the storage to create on the host. Click Submit to save the compute profile. CLI procedure Create a compute profile to use with the Google GCE compute resource: Add GCE details to the compute profile: 15.5. Creating Image-based Hosts on Google Compute Engine In Satellite, you can use Google Compute Engine provisioning to create hosts from an existing image. The new host entry triggers the Google Compute Engine server to create the instance using the pre-existing image as a basis for the new volume. 
To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context. From the Host Group list, select the host group that you want to use to populate the form. From the Deploy on list, select the Google Compute Engine connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. From the Lifecycle Environment list, select the environment. Click the Interfaces tab and click Edit on the host's interface. Verify that the fields are automatically populated, particularly the following items: The Name from the Host tab becomes the DNS name . The MAC address field is blank. Google Compute Engine assigns a MAC address to the host during provisioning. Satellite Server automatically assigns an IP address for the new host. The Domain field is populated with the required domain. The Managed , Primary , and Provision options are automatically selected for the first interface on the host. If not, select them. Click the Operating System tab, and confirm that all fields automatically contain values. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure Create the host with the hammer host create command and include --provision-method image . Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 15.6. Deleting a VM on Google GCE You can delete VMs running on Google GCE on your Satellite Server. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Google GCE provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Google GCE compute resource while retaining any associated hosts within Satellite. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually. 15.7. Uninstalling Google GCE Plugin If you have previously installed the Google GCE plugin but don't use it anymore to manage and deploy hosts to GCE, you can uninstall it from your Satellite Server. Procedure Uninstall the GCE compute resource provider from your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the Available Providers tab to verify the removal of the Google GCE plugin. | [
"satellite-installer --enable-foreman-compute-gce",
"scp gce_key.json [email protected]:/usr/share/foreman/gce_key.json",
"chown foreman /usr/share/foreman/ gce_key .json",
"chmod 0600 /usr/share/foreman/ gce_key .json",
"restorecon -vv /usr/share/foreman/ gce_key .json",
"scp gce_key.json [email protected]:/usr/share/foreman/gce_key.json",
"chown foreman /usr/share/foreman/ gce_key .json",
"chmod 0600 /usr/share/foreman/ gce_key .json",
"restorecon -vv /usr/share/foreman/ gce_key .json",
"hammer compute-resource create --email \" My_GCE_Email \" --key-path \" Path_To_My_GCE_Key.json \" --name \" My_GCE_Compute_Resource \" --project \" My_GCE_Project_ID \" --provider \"gce\" --zone \" My_Zone \"",
"hammer compute-resource image create --name ' gce_image_name ' --compute-resource ' gce_cr ' --operatingsystem-id 1 --architecture-id 1 --uuid ' 3780108136525169178 ' --username ' admin '",
"hammer compute-profile create --name My_GCE_Compute_Profile",
"hammer compute-profile values create --compute-attributes \"machine_type=f1-micro,associate_external_ip=true,network=default\" --compute-profile \" My_GCE_Compute_Profile \" --compute-resource \" My_GCE_Compute_Resource \" --volume \" size_gb=20 \"",
"hammer host create --architecture x86_64 --compute-profile \" gce_profile_name \" --compute-resource \" My_GCE_Compute_Resource \" --image \" My_GCE_Image \" --interface \"type=interface,domain_id=1,managed=true,primary=true,provision=true\" --location \" My_Location \" --name \" GCE_VM \" --operatingsystem \" My_Operating_System \" --organization \" My_Organization \" --provision-method 'image' --puppet-ca-proxy-id 1 --puppet-environment-id 1 --puppet-proxy-id 1 --root-password \" My_Root_Password \"",
"yum remove -y foreman-gce satellite-installer --no-enable-foreman-compute-gce"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/Provisioning_Cloud_Instances_on_Google_Compute_Engine_provisioning |
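A quick way to confirm that the compute resource and its images were registered correctly is to read them back with hammer. This is a sketch rather than part of the documented procedure; the resource name is the example value used above:
hammer compute-resource info --name "My_GCE_Compute_Resource"
hammer compute-resource image list --compute-resource "My_GCE_Compute_Resource"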
4.8. Virtualization | 4.8. Virtualization Red Hat Enterprise Linux 6.6 Hosted as a Generation 2 Virtual Machine As a Technology Preview, Red Hat Enterprise Linux 6.6 can be used as a generation 2 virtual machine in the Microsoft Hyper-V Server 2012 R2 host. In addition to the functions supported in the previous generation, generation 2 provides new functions on a virtual machine; for example: boot from a SCSI virtual hard disk, and UEFI firmware support. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/virtualization_tp
Preface | Preface Red Hat Enterprise Linux minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7.3 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. Capabilities and limits of Red Hat Enterprise Linux 7 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/pref-release_notes-preface |
Chapter 6. Kafka Connect configuration properties | Chapter 6. Kafka Connect configuration properties config.storage.topic Type: string Importance: high The name of the Kafka topic where connector configurations are stored. group.id Type: string Importance: high A unique string that identifies the Connect cluster group this worker belongs to. key.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. offset.storage.topic Type: string Importance: high The name of the Kafka topic where source connector offsets are stored. status.storage.topic Type: string Importance: high The name of the Kafka topic where connector and task status are stored. value.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. bootstrap.servers Type: list Default: localhost:9092 Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). exactly.once.source.support Type: string Default: disabled Valid Values: (case insensitive) [DISABLED, ENABLED, PREPARING] Importance: high Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones. To enable exactly-once source support on a new cluster, set this property to 'enabled'. To enable support on an existing cluster, first set to 'preparing' on every worker in the cluster, then set to 'enabled'. A rolling upgrade may be used for both changes. For more information on this feature, see the exactly-once source support documentation . heartbeat.interval.ms Type: int Default: 3000 (3 seconds) Importance: high The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms , but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. rebalance.timeout.ms Type: int Default: 60000 (1 minute) Importance: high The maximum allowed time for each worker to join the group once a rebalance has begun. 
This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures. session.timeout.ms Type: int Default: 10000 (10 seconds) Importance: high The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms . ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. connector.client.config.override.policy Type: string Default: All Importance: medium Class name or alias of implementation of ConnectorClientConfigOverridePolicy . Defines what client configurations can be overridden by the connector.
The default implementation is All , meaning connector configurations can override all client properties. The other possible policies in the framework include None to disallow connectors from overriding client properties, and Principal to allow connectors to override only client principals. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. 
However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. worker.sync.timeout.ms Type: int Default: 3000 (3 seconds) Importance: medium When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. 
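For orientation, the following is a minimal sketch of a distributed worker configuration that combines several of the properties described above. The bootstrap servers, group ID, internal topic names, and trust store path are illustrative assumptions, not required values:
bootstrap.servers=kafka0.example.com:9092,kafka1.example.com:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
# Optional TLS settings for the worker's connections to the Kafka cluster
security.protocol=SSL
ssl.truststore.type=PKCS12
ssl.truststore.location=/opt/kafka/connect-truststore.p12
ssl.truststore.password=<truststore_password>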
worker.unsync.backoff.ms Type: int Default: 300000 (5 minutes) Importance: medium When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. access.control.allow.methods Type: string Default: "" Importance: low Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. access.control.allow.origin Type: string Default: "" Importance: low Value to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API. admin.listeners Type: list Default: null Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443 . Importance: low List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property). auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. client.id Type: string Default: "" Importance: low An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. config.providers Type: list Default: "" Importance: low Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets. config.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the configuration storage topic. connect.protocol Type: string Default: sessioned Valid Values: [eager, compatible, sessioned] Importance: low Compatibility mode for Kafka Connect Protocol. header.converter Type: class Default: org.apache.kafka.connect.storage.SimpleHeaderConverter Importance: low HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas. inter.worker.key.generation.algorithm Type: string Default: HmacSHA256 Valid Values: Any KeyGenerator algorithm supported by the worker JVM Importance: low The algorithm to use for generating internal request keys. 
The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. inter.worker.key.size Type: int Default: null Importance: low The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used. inter.worker.key.ttl.ms Type: int Default: 3600000 (1 hour) Valid Values: [0,... ,2147483647] Importance: low The TTL of generated session keys used for internal request validation (in milliseconds). inter.worker.signature.algorithm Type: string Default: HmacSHA256 Valid Values: Any MAC algorithm supported by the worker JVM Importance: low The algorithm used to sign internal requests. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. inter.worker.verification.algorithms Type: list Default: HmacSHA256 Valid Values: A list of one or more MAC algorithms, each supported by the worker JVM Importance: low A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signature.algorithm property. The algorithm(s) '[HmacSHA256]' will be used as a default on JVMs that provide them; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. listeners Type: list Default: http://:8083 Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443 . Importance: low List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. offset.flush.interval.ms Type: long Default: 60000 (1 minute) Importance: low Interval at which to try committing offsets for tasks. offset.flush.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. This property has no effect for source connectors running with exactly-once support.
offset.storage.partitions Type: int Default: 25 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the offset storage topic. offset.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the offset storage topic. plugin.discovery Type: string Default: hybrid_warn Valid Values: (case insensitive) [ONLY_SCAN, SERVICE_LOAD, HYBRID_WARN, HYBRID_FAIL] Importance: low Method to use to discover plugins present in the classpath and plugin.path configuration. This can be one of multiple values with the following meanings: * only_scan: Discover plugins only by reflection. Plugins which are not discoverable by ServiceLoader will not impact worker startup. * hybrid_warn: Discover plugins reflectively and by ServiceLoader. Plugins which are not discoverable by ServiceLoader will print warnings during worker startup. * hybrid_fail: Discover plugins reflectively and by ServiceLoader. Plugins which are not discoverable by ServiceLoader will cause worker startup to fail. * service_load: Discover plugins only by ServiceLoader. Faster startup than other modes. Plugins which are not discoverable by ServiceLoader may not be usable. plugin.path Type: list Default: null Importance: low List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: a) directories immediately containing jars with plugins and their dependencies b) uber-jars with plugins and their dependencies c) directories immediately containing the package directory structure of classes of plugins and their dependencies Note: symlinks will be followed to discover dependencies or plugins. Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. response.http.headers.config Type: string Default: "" Valid Values: Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma Importance: low Rules for REST API HTTP response headers. 
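As an illustration of the REST-related properties above, a worker's REST endpoints might be configured as follows. The bind addresses, origin, allowed methods, and header rule are illustrative assumptions:
listeners=HTTP://0.0.0.0:8083
admin.listeners=HTTP://0.0.0.0:8084
access.control.allow.origin=https://ui.example.com
access.control.allow.methods=GET,POST,PUT,DELETE
response.http.headers.config=add Strict-Transport-Security: max-age=31536000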
rest.advertised.host.name Type: string Default: null Importance: low If this is set, this is the hostname that will be given out to other workers to connect to. rest.advertised.listener Type: string Default: null Importance: low Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. rest.advertised.port Type: int Default: null Importance: low If this is set, this is the port that will be given out to other workers to connect to. rest.extension.classes Type: list Default: "" Importance: low Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc. retry.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms , then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. 
This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. 
If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. scheduled.rebalance.max.delay.ms Type: int Default: 300000 (5 minutes) Valid Values: [0,... ,2147483647] Importance: low The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Valid Values: [0,... ] Importance: low The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. 
This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: low Configures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. unlike required, if this option is set client can choose not to provide authentication information about itself ssl.client.auth=none This means client authentication is not needed. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. status.storage.partitions Type: int Default: 5 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the status storage topic. status.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the status storage topic. task.shutdown.graceful.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All task have shutdown triggered, then they are waited on sequentially. topic.creation.enable Type: boolean Default: true Importance: low Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with topic.creation. properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically. 
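When topic.creation.enable is left at its default of true, the settings that govern how a source connector's topics are created live in the connector configuration rather than the worker configuration. A minimal sketch of those connector-side properties, with illustrative values:
topic.creation.default.replication.factor=3
topic.creation.default.partitions=10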
topic.tracking.allow.reset Type: boolean Default: true Importance: low If set to true, it allows user requests to reset the set of active topics per connector. topic.tracking.enable Type: boolean Default: true Importance: low Enable tracking the set of active topics per connector during runtime. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/kafka-connect-configuration-properties-str |
Appendix B. Dashboard Builder | Appendix B. Dashboard Builder B.1. JBoss Dashboard Builder JBoss Dashboard Builder is an open source dashboard and reporting tool that allows: Visual configuration and personalization of dashboards. Graphical representation of KPIs (Key Performance Indicators). Definition of interactive report tables. Filtering and search, both in-memory and database based. Process execution metrics dashboards. Data extraction from external systems, through different protocols. Access control for different user profiles to different levels of information. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/appe-dashboard_builder
Managing Hosts | Managing Hosts Red Hat Satellite 6.11 A guide to managing hosts in a Red Hat Satellite 6 environment. Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/index |
16.9. virt-inspector: Inspecting Guest Virtual Machines | 16.9. virt-inspector: Inspecting Guest Virtual Machines This section provides information about inspecting guest virtual machines using virt-inspector . 16.9.1. Introduction virt-inspector is a tool for inspecting a disk image to find out what operating system it contains. Note Red Hat Enterprise Linux 6.2 provides two variations of this program: virt-inspector is the original program as found in Red Hat Enterprise Linux 6.0 and is now deprecated upstream. virt-inspector2 is the same as the new upstream virt-inspector program. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virt-inspector |
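As a sketch of typical usage, the newer variant can be pointed directly at a disk image and prints an XML description of any operating systems it finds; the image path is an illustrative assumption and the exact options depend on the installed libguestfs version:
virt-inspector2 -a /var/lib/libvirt/images/guest.img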
Appendix A. List of tickets by component | Appendix A. List of tickets by component Bugzilla and JIRA IDs are listed in this document for reference. Bugzilla bugs that are publicly accessible include a link to the ticket. Component Tickets 389-ds-base BZ#1859301 , BZ#1862529 , BZ#1859218 , BZ#1850275 , BZ#1851975 KVM Hypervisor JIRA:RHELPLAN-44450 NetworkManager BZ#1900260, BZ#1878783 , BZ#1766944 , BZ#1912236 OpenIPMI BZ#1796588 SLOF BZ#1910848 accel-config BZ#1843266 anaconda BZ#1890009, BZ#1874394 , BZ#1642391, BZ#1609325, BZ#1854307, BZ#1821192, BZ#1822880 , BZ#1914955 , BZ#1847681, BZ#1903786 , BZ#1931069 , BZ#1954408, BZ#1897657 apr BZ#1819607 authselect BZ#1892761 bcc BZ#1879411 bind BZ#1876492 , BZ#1882040 , BZ#1854148 bpftrace BZ#1879413 clevis BZ#1887836 , BZ#1853651 cloud-init BZ#1886430 , BZ#1750862 , BZ#1957532 , BZ#1963981 cmake BZ#1816874 cockpit BZ#1666722 corosync-qdevice BZ#1784200 corosync BZ#1870449 createrepo_c BZ#1795936 , BZ#1894361 crun BZ#1841438 crypto-policies BZ#1919155 , BZ#1660839 dhcp BZ#1883999 distribution BZ#1877430, BZ#1855776, BZ#1855781, BZ#1657927 dnf BZ#1865803 , BZ#1807446 , BZ#1698145 dwarves BZ#1903566 dyninst BZ#1892001 , BZ#1892007 edk2 BZ#1935497 elfutils BZ#1875318 , BZ#1879758 fapolicyd BZ#1940289 , BZ#1896875 , BZ#1887451 fence-agents BZ#1775847 freeipmi BZ#1861627 freeradius BZ#1723362 gcc BZ#1868446 , BZ#1821994, BZ#1850498, BZ#1656139, BZ#1891998 gdb BZ#1853140 ghostscript BZ#1874523 glibc BZ#1868106 , BZ#1871397 , BZ#1880670 , BZ#1882466, BZ#1871396 , BZ#1893662 , BZ#1817513, BZ#1871385 , BZ#1871387 , BZ#1871395 gnome-shell-extensions BZ#1717947 gnome-software BZ#1668760 gnutls BZ#1628553 go-toolset BZ#1870531 grafana-container BZ#1916154 grafana-pcp BZ#1845592 , BZ#1854093 grafana BZ#1850471 grub2 BZ#1583445 httpd BZ#1869576, BZ#1883648 hwloc BZ#1841354 , BZ#1917560 ima-evm-utils BZ#1868683 ipa BZ#1891056 , BZ#1340463 , BZ#1816784 , BZ#1924707 , BZ#1664719 , BZ#1664718 iproute BZ#1849815 iptraf-ng BZ#1842690, BZ#1906097 jmc BZ#1919283 kernel-rt BZ#1858099 kernel BZ#1806882, BZ#1846838, BZ#1884857, BZ#1876527, BZ#1660290, BZ#1885850, BZ#1649647, BZ#1838876, BZ#1871246, BZ#1893882, BZ#1876519, BZ#1860031, BZ#1844416, BZ#1780258, BZ#1851933, BZ#1885406, BZ#1867490, BZ#1908893, BZ#1919745, BZ#1867910, BZ#1887940, BZ#1874005, BZ#1871214, BZ#1622041, BZ#1533270, BZ#1900674, BZ#1869758, BZ#1861261, BZ#1848427, BZ#1847567, BZ#1844157, BZ#1844111, BZ#1811839, BZ#1877019, BZ#1548297, BZ#1844086, BZ#1839055, BZ#1905088, BZ#1882620, BZ#1784246, BZ#1916583, BZ#1924230, BZ#1793389, BZ#1944639, BZ#1694705, BZ#1748451, BZ#1654962, BZ#1708456, BZ#1812577, BZ#1666538, BZ#1602962, BZ#1609288, BZ#1730502, BZ#1865745, BZ#1868526, BZ#1910358, BZ#1924016, BZ#1906870, BZ#1940674, BZ#1930576, BZ#1907271, BZ#1942888, BZ#1836058, BZ#1934033, BZ#1519039, BZ#1627455, BZ#1501618, BZ#1495358, BZ#1633143, BZ#1570255, BZ#1814836, BZ#1696451, BZ#1348508, BZ#1839311, BZ#1783396, JIRA:RHELPLAN-57712, BZ#1837187, BZ#1904496, BZ#1660337, BZ#1665295, BZ#1569610 kexec-tools BZ#1844941, BZ#1931266, BZ#1854037 kmod-redhat-oracleasm BZ#1827015 kpatch BZ#1798711 krb5 BZ#1877991 libbpf BZ#1919345 libgnome-keyring BZ#1607766 libguestfs BZ#1554735 libmpc BZ#1835193 libpcap BZ#1743650 libpwquality BZ#1537240 libreswan BZ#1891128 , BZ#1372050, BZ#1025061, BZ#1934058 , BZ#1934859 libselinux-python-2.8-module BZ#1666328 libselinux BZ#1879368 libsemanage BZ#1913224 libvirt BZ#1664592, BZ#1332758 , BZ#1528684 libvpd BZ#1844429 llvm-toolset BZ#1892716 lvm2 BZ#1496229, BZ#1768536 
mariadb-connector-odbc BZ#1944692 mariadb BZ#1936842 , BZ#1944653 , BZ#1942330 mesa BZ#1886147 micropipenv BZ#1849096 mod_fcgid BZ#1876525 mod_security BZ#1824859 mutter BZ#1886034 mysql-selinux BZ#1895021 net-snmp BZ#1817190 nfs-utils BZ#1592011 nispor BZ#1848817 nmstate BZ#1674456 nss_nis BZ#1803161 nss BZ#1817533 , BZ#1645153 opal-prd BZ#1844427 opencryptoki BZ#1847433 opencv BZ#1886310 openmpi BZ#1866402 opensc BZ#1877973 , BZ#1947025 openscap BZ#1824152 , BZ#1887794 , BZ#1840579 openssl BZ#1810911 osbuild-composer BZ#1951964 oscap-anaconda-addon BZ#1843932 , BZ#1665082, BZ#1674001 , BZ#1691305, BZ#1834716 p11-kit BZ#1887853 pacemaker BZ#1371576 , BZ#1948620 pcp-container BZ#1916155 pcp BZ#1854035 , BZ#1847808 pcs BZ#1869399, BZ#1741056 , BZ#1667066 , BZ#1667061 , BZ#1457314 , BZ#1839637 , BZ#1619620, BZ#1851335 perl-IO-String BZ#1890998 perl-Time-HiRes BZ#1895852 pki-core BZ#1868233 , BZ#1729215 podman BZ#1734854, BZ#1881894 , BZ#1932083 policycoreutils BZ#1868717 , BZ#1926386 popt BZ#1843787 postfix BZ#1688389 , BZ#1711885 powerpc-utils BZ#1853297 py3c BZ#1841060 pyOpenSSL BZ#1629914 pykickstart BZ#1637872 pyodbc BZ#1881490 python-PyMySQL BZ#1820628 python-blivet BZ#1656485 qemu-kvm BZ#1790620, BZ#1719687 , BZ#1860743 , BZ#1740002 , BZ#1651994 quota BZ#1868671 rear BZ#1729499 , BZ#1898080, BZ#1832394 redhat-support-tool BZ#1802026 redis BZ#1862063 resource-agents BZ#1471182 rhel-system-roles BZ#1865990 , BZ#1926947 , BZ#1889484 , BZ#1927943 , BZ#1893712 , BZ#1893743 , BZ#1893906 , BZ#1893908 , BZ#1895188 , BZ#1893696 , BZ#1893699 , BZ#1889893 , BZ#1893961 rpm BZ#1834931 , BZ#1923167 , BZ#1688849 rshim BZ#1744737 rsyslog BZ#1869874 , JIRA:RHELPLAN-10431, BZ#1679512 rust-toolset BZ#1896712 samba BZ#1878109 , JIRA:RHELPLAN-13195, Jira:RHELDOCS-16612 scap-security-guide BZ#1889344 , BZ#1927019 , BZ#1918742 , BZ#1778188 , BZ#1843913 , BZ#1858866 , BZ#1750755 scap-workbench BZ#1877522 selinux-policy BZ#1889673 , BZ#1860443 , BZ#1931848 , BZ#1461914 sendmail BZ#1868041 setroubleshoot BZ#1875290 , BZ#1794807 skopeo BZ#1940854 sos BZ#1966838 spamassassin BZ#1822388 spice BZ#1849563 sssd BZ#1819012 , BZ#1884196 , BZ#1884213 , BZ#1784459 , BZ#1893698 , BZ#1881992 stalld BZ#1875037 stratisd BZ#1798244 , BZ#1868100 subscription-manager BZ#1905398 subversion BZ#1844947 sudo BZ#1786990 swig BZ#1853639 systemd BZ#1827462 systemtap BZ#1875341 tang-container BZ#1913310 tang BZ#1828558 texlive BZ#1889802 tpm2-abrmd BZ#1855177 tuned BZ#1874052 udica BZ#1763210 unbound BZ#1850460 usbguard BZ#1887448 , BZ#1940060 valgrind BZ#1504123, BZ#1937340 virtio-win BZ#1861229 wayland BZ#1673073 xdp-tools BZ#1880268 xfsprogs BZ#1949743 xorg-x11-drv-qxl BZ#1642887 xorg-x11-server BZ#1698565 other BZ#1839151 , BZ#1780124 , JIRA:RHELPLAN-59941, JIRA:RHELPLAN-59938, JIRA:RHELPLAN-59950, BZ#1952421 , JIRA:RHELPLAN-37817, BZ#1918055, JIRA:RHELPLAN-56664, JIRA:RHELPLAN-56661, JIRA:RHELPLAN-39843, BZ#1925192 , JIRA:RHELPLAN-73418, JIRA:RHELPLAN-63081, BZ#1935686, BZ#1634655, JIRA:RHELPLAN-56782, JIRA:RHELPLAN-72660, JIRA:RHELPLAN-72994, JIRA:RHELPLAN-37579, BZ#1952161, BZ#1640697, BZ#1659609, BZ#1687900 , BZ#1697896, JIRA:RHELPLAN-59111, BZ#1757877, BZ#1777138, JIRA:RHELPLAN-27987, JIRA:RHELPLAN-28940, JIRA:RHELPLAN-34199, JIRA:RHELPLAN-57914, BZ#1897383 , BZ#1741436, BZ#1971061 , JIRA:RHELPLAN-58629, BZ#1960412 , BZ#1959020, BZ#1690207, JIRA:RHELPLAN-1212, BZ#1559616, BZ#1889737 , BZ#1812552 , JIRA:RHELPLAN-14047, BZ#1769727 , JIRA:RHELPLAN-27394, JIRA:RHELPLAN-27737, JIRA:RHELPLAN-56659, BZ#1906489 , BZ#1957316 , 
BZ#1960043 , BZ#1642765, JIRA:RHELPLAN-10304, BZ#1646541, BZ#1647725, BZ#1932222 , BZ#1686057 , BZ#1748980 , JIRA:RHELPLAN-71200, BZ#1827628, JIRA:RHELPLAN-45858, BZ#1871025 , BZ#1871953 , BZ#1874892, BZ#1893767 , BZ#1916296, BZ#1926114 , BZ#1904251, JIRA:RHELPLAN-59825, BZ#1920624 , JIRA:RHELPLAN-70700, BZ#1929173 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/list_of_tickets_by_component |
Chapter 2. Fault tolerant deployments using multiple Prism Elements | Chapter 2. Fault tolerant deployments using multiple Prism Elements By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). To improve the fault tolerance of your OpenShift Container Platform cluster, you can specify that these machines be distributed across multiple Nutanix clusters by configuring failure domains. A failure domain represents an additional Prism Element instance that is available to OpenShift Container Platform machine pools during and after installation. 2.1. Installation method and failure domain configuration The OpenShift Container Platform installation method determines how and when you configure failure domains: If you deploy using installer-provisioned infrastructure, you can configure failure domains in the installation configuration file before deploying the cluster. For more information, see Configuring failure domains . You can also configure failure domains after the cluster is deployed. For more information about configuring failure domains post-installation, see Adding failure domains to an existing Nutanix cluster . If you deploy using infrastructure that you manage (user-provisioned infrastructure) no additional configuration is required. After the cluster is deployed, you can manually distribute control plane and compute machines across failure domains. 2.2. Adding failure domains to an existing Nutanix cluster By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). After an OpenShift Container Platform cluster is deployed, you can improve its fault tolerance by adding additional Prism Element instances to the deployment using failure domains. A failure domain represents a single Prism Element instance where new control plane and compute machines can be deployed and existing control plane and compute machines can be distributed. 2.2.1. Failure domain requirements When planning to use failure domains, consider the following requirements: All Nutanix Prism Element instances must be managed by the same instance of Prism Central. A deployment that is comprised of multiple Prism Central instances is not supported. The machines that make up the Prism Element clusters must reside on the same Ethernet network for failure domains to be able to communicate with each other. A subnet is required in each Prism Element that will be used as a failure domain in the OpenShift Container Platform cluster. When defining these subnets, they must share the same IP address prefix (CIDR) and should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. 2.2.2. Adding failure domains to the Infrastructure CR You add failure domains to an existing Nutanix cluster by modifying its Infrastructure custom resource (CR) ( infrastructures.config.openshift.io ). Tip It is recommended that you configure three failure domains to ensure high-availability. Procedure Edit the Infrastructure CR by running the following command: USD oc edit infrastructures.config.openshift.io cluster Configure the failure domains. Example Infrastructure CR with Nutanix failure domains spec: cloudConfig: key: config name: cloud-provider-config #... 
platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> # ... where: <uuid> Specifies the universally unique identifier (UUID) of the Prism Element. <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <network_uuid> Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. Save the CR to apply the changes. 2.2.3. Distributing control planes across failure domains You distribute control planes across Nutanix failure domains by modifying the control plane machine set custom resource (CR). Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). The control plane machine set custom resource (CR) is in an active state. For more information on checking the control plane machine set custom resource state, see "Additional resources". Procedure Edit the control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api Configure the control plane machine set to use failure domains by adding a spec.template.machines_v1beta1_machine_openshift_io.failureDomains stanza. Example control plane machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: # ... template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3> # ... Save your changes. By default, the control plane machine set propagates changes to your control plane configuration automatically. If the cluster is configured to use the OnDelete update strategy, you must replace your control planes manually. For more information, see "Additional resources". Additional resources Checking the control plane machine set custom resource state Replacing a control plane machine 2.2.4. Distributing compute machines across failure domains You can distribute compute machines across Nutanix failure domains one of the following ways: Editing existing compute machine sets allows you to distribute compute machines across Nutanix failure domains as a minimal configuration update. Replacing existing compute machine sets ensures that the specification is immutable and all your machines are the same. 2.2.4.1. Editing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by using an existing compute machine set, you update the compute machine set with your configuration and then use scaling to replace the existing compute machines. 
Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR: USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m Edit the first compute machine set by running the following command: USD oc edit machineset <machine_set_name_1> -n openshift-machine-api Configure the compute machine set to use the first failure domain by updating the following values in the spec.template.spec.providerSpec.value stanza. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Note the value of spec.replicas , because you need it when scaling the compute machine set to apply the changes. Save your changes. List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=<twice_the_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set is 2 , scale the replicas to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas.
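If you prefer to watch the replacement machines come up rather than re-running the list command, the standard --watch flag of oc get can be used; this is an optional convenience, not a required step: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> --watch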
To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=<original_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set was 2 , scale the replicas to 2 . As required, continue to modify machine sets to reference the additional failure domains that are available to the deployment. Additional resources Modifying a compute machine set 2.2.4.2. Replacing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by replacing a compute machine set, you create a new compute machine set with your configuration, wait for the machines that it creates to start, and then delete the old compute machine set. Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m Note the names of the existing compute machine sets. Create a YAML file that contains the values for your new compute machine set custom resource (CR) by using one of the following methods: Copy an existing compute machine set configuration into a new file by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml You can edit this YAML file with your preferred text editor. Create a blank YAML file named <new_machine_set_name_1>.yaml with your preferred text editor and include the required values for your new compute machine set. If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create machines with a worker or infra role. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. 
For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Configure the new compute machine set to use the first failure domain by updating or adding the following to the spec.template.spec.providerSpec.value stanza in the <new_machine_set_name_1>.yaml file. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Save your changes. Create a compute machine set CR by running the following command: USD oc create -f <new_machine_set_name_1>.yaml As required, continue to create compute machine sets to reference the additional failure domains that are available to the deployment. List the machines that are managed by the new compute machine sets by running the following command for each new compute machine set: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s When the new machines are in the Running phase, you can delete the old compute machine sets that do not include the failure domain configuration. When you have verified that the new machines are in the Running phase, delete the old compute machine sets by running the following command for each: USD oc delete machineset <original_machine_set_name_1> -n openshift-machine-api Verification To verify that the compute machine sets without the updated configuration are deleted, list the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s To verify that the compute machines without the updated configuration are deleted, list the machines in your cluster by running the following command: USD oc get -n openshift-machine-api machines Example output while deletion is in progress NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h Example output when deletion is complete NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine <machine_from_new_1> -n openshift-machine-api Additional resources Creating a compute machine set on Nutanix | [
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config # platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid>",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3>",
"oc describe infrastructures.config.openshift.io cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m",
"oc edit machineset <machine_set_name_1> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h",
"oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=<twice_the_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>",
"oc scale --replicas=<original_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api",
"oc describe infrastructures.config.openshift.io cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml",
"oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>",
"oc create -f <new_machine_set_name_1>.yaml",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s",
"oc delete machineset <original_machine_set_name_1> -n openshift-machine-api",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s",
"oc get -n openshift-machine-api machines",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s",
"oc describe machine <machine_from_new_1> -n openshift-machine-api"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_nutanix/nutanix-failure-domains |
9.6. Random Number Generator (RNG) Device | 9.6. Random Number Generator (RNG) Device virtio-rng is a virtual RNG ( random number generator ) device that feeds RNG data to the guest virtual machine's operating system, thereby providing fresh entropy for guest virtual machines on request. Using an RNG is particularly useful when devices such as a keyboard and mouse are not enough to generate sufficient entropy on the guest virtual machine. The virtio-rng device is available for both Red Hat Enterprise Linux and Windows guest virtual machines. Refer to the Note for instructions on installing the Windows requirements. Unless noted, the following descriptions are for both Red Hat Enterprise Linux and Windows guest virtual machines. When virtio-rng is enabled on a Linux guest virtual machine, a chardev is created in the guest virtual machine at the location /dev/hwrng/ . This chardev can then be opened and read to fetch entropy from the host physical machine. In order for guest virtual machines' applications to benefit from using randomness from the virtio-rng device transparently, the input from /dev/hwrng/ must be relayed to the kernel entropy pool in the guest virtual machine. This can be accomplished by coupling the information in this location with the rngd daemon (contained within the rng-tools package), which routes the entropy to the guest virtual machine's /dev/random file. In Red Hat Enterprise Linux 6 guest virtual machines, this coupling is done manually by running the following command: For more assistance, run the man rngd command for an explanation of the command options shown here. For further examples, refer to Procedure 9.11, "Implementing virtio-rng with the command line tools" for configuring the virtio-rng device. Note Windows guest virtual machines require the driver viorng to be installed. Once the driver is installed, the virtual RNG device works using the CNG (Cryptography Next Generation) API provided by Microsoft, and the viorng device appears in the list of RNG providers. Procedure 9.11. Implementing virtio-rng with the command line tools Shut down the guest virtual machine. In a terminal window, using the virsh edit domain-name command, open the XML file for the desired guest virtual machine. Edit the <devices> element to include the following: ... <devices> <rng model='virtio'> <rate period="2000" bytes="1234"/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='192.0.2.1' service='1234'/> </backend> </rng> </devices> ... | [
"rngd -b -r /dev/hwrng/ -o /dev/random/",
"<devices> <rng model='virtio'> <rate period=\"2000\" bytes=\"1234\"/> <backend model='random'>/dev/random</backend> <source mode='bind' service='1234'> <source mode='connect' host='192.0.2.1' service='1234'> </backend> </rng> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-guest_virtual_machine_device_configuration-random_number_generator_device |
Chapter 13. Authentication and Interoperability | Chapter 13. Authentication and Interoperability Manual Backup and Restore Functionality This update introduces the ipa-backup and ipa-restore commands to Identity Management (IdM), which allow users to manually back up their IdM data and restore them in case of a hardware failure. For further information, see the ipa-backup (1) and ipa-restore (1) manual pages or the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Support for Migration from WinSync to Trust This update implements the new ID Views mechanism of user configuration. It enables the migration of Identity Management users from a WinSync synchronization-based architecture used by Active Directory to an infrastructure based on Cross-Realm Trusts. For the details of ID Views and the migration procedure, see the documentation in the Windows Integration Guide . One-Time Password Authentication One of the best ways to increase authentication security is to require two factor authentication (2FA). A very popular option is to use one-time passwords (OTP). This technique began in the proprietary space, but over time some open standards emerged (HOTP: RFC 4226, TOTP: RFC 6238). Identity Management in Red Hat Enterprise Linux 7.1 contains the first implementation of the standard OTP mechanism. For further details, see the documentation in the System-Level Authentication Guide . SSSD Integration for the Common Internet File System A plug-in interface provided by SSSD has been added to configure the way in which the cifs-utils utility conducts the ID-mapping process. As a result, an SSSD client can now access a CIFS share with the same functionality as a client running the Winbind service. For further information, see the documentation in the Windows Integration Guide . Certificate Authority Management Tool The ipa-cacert-manage renew command has been added to the Identity management (IdM) client, which makes it possible to renew the IdM Certification Authority (CA) file. This enables users to smoothly install and set up IdM using a certificate signed by an external CA. For details on this feature, see the ipa-cacert-manage (1) manual page. Increased Access Control Granularity It is now possible to regulate read permissions of specific sections in the Identity Management (IdM) server UI. This allows IdM server administrators to limit the accessibility of privileged content only to chosen users. In addition, authenticated users of the IdM server no longer have read permissions to all of its contents by default. These changes improve the overall security of the IdM server data. Limited Domain Access for Unprivileged Users The domains= option has been added to the pam_sss module, which overrides the domains= option in the /etc/sssd/sssd.conf file. In addition, this update adds the pam_trusted_users option, which allows the user to add a list of numerical UIDs or user names that are trusted by the SSSD daemon, and the pam_public_domains option and a list of domains accessible even for untrusted users. The mentioned additions allow the configuration of systems, where regular users are allowed to access the specified applications, but do not have login rights on the system itself. For additional information on this feature, see the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Automatic data provider configuration The ipa-client-install command now by default configures SSSD as the data provider for the sudo service. 
This behavior can be disabled by using the --no-sudo option. In addition, the --nisdomain option has been added to specify the NIS domain name for the Identity Management client installation, and the --no_nisdomain option has been added to avoid setting the NIS domain name. If neither of these options is used, the IPA domain is used instead. Use of AD and LDAP sudo Providers The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7.1, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file. 32-bit Version of krb5-server and krb5-server-ldap Deprecated The 32-bit version of Kerberos 5 Server is no longer distributed, and the following packages are deprecated since Red Hat Enterprise Linux 7.1: krb5-server.i686 , krb5-server.s390 , krb5-server.ppc , krb5-server-ldap.i686 , krb5-server-ldap.s390 , and krb5-server-ldap.ppc . There is no need to distribute the 32-bit version of krb5-server on Red Hat Enterprise Linux 7, which is supported only on the following architectures: AMD64 and Intel 64 systems ( x86_64 ), 64-bit IBM Power Systems servers ( ppc64 ), and IBM System z ( s390x ). SSSD Leverages GPO Policies to Define HBAC SSSD is now able to use GPO objects stored on an AD server for access control. This enhancement mimics the functionality of Windows clients, allowing administrators to use a single set of access control rules to handle both Windows and Unix machines. In effect, Windows administrators can now use GPOs to control access to Linux clients. Apache Modules for IPA A set of Apache modules has been added to Red Hat Enterprise Linux 7.1 as a Technology Preview. The Apache modules can be used by external applications to achieve tighter interaction with Identity Management beyond simple authentication. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-authentication_and_interoperability
Chapter 3. Alertmanager [monitoring.coreos.com/v1] | Chapter 3. Alertmanager [monitoring.coreos.com/v1] Description Alertmanager describes an Alertmanager cluster. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 3.1.1. .spec Description Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalPeers array (string) AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster. affinity object If specified, the pod's scheduling constraints. alertmanagerConfigMatcherStrategy object The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. alertmanagerConfigNamespaceSelector object Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. alertmanagerConfigSelector object AlertmanagerConfigs to be selected for to merge and configure Alertmanager with. alertmanagerConfiguration object EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This field may change in future releases. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in the pod. If the service account has automountServiceAccountToken: true , set the field to false to opt out of automounting API credentials. baseImage string Base image that is used to deploy pods, without tag. Deprecated: use 'image' instead clusterAdvertiseAddress string ClusterAdvertiseAddress is the explicit address to advertise in cluster. Needs to be provided for non RFC1918 [1] (public) addresses. [1] RFC1918: https://tools.ietf.org/html/rfc1918 clusterGossipInterval string Interval between gossip attempts. clusterPeerTimeout string Timeout for cluster peering. clusterPushpullInterval string Interval between pushpull attempts. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. 
Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/alertmanager/configmaps/<configmap-name> in the 'alertmanager' container. configSecret string ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains the configuration for this Alertmanager instance. If empty, it defaults to alertmanager-<alertmanager-name> . The Alertmanager configuration should be available under the alertmanager.yaml key. Additional keys from the original secret are copied to the generated secret and mounted into the /etc/alertmanager/config directory in the alertmanager container. If either the secret or the alertmanager.yaml key is missing, the operator provisions a minimal Alertmanager configuration with one empty receiver (effectively dropping alert notifications). containers array Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. externalUrl string The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. forceEnableClusterMode boolean ForceEnableClusterMode ensures Alertmanager does not deactivate the cluster mode when running with a single replica. Use case is e.g. spanning an Alertmanager cluster across Kubernetes clusters with a single replica in each. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Alertmanager is being configured. imagePullPolicy string Image pull policy for the 'alertmanager', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. 
The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. listenLocal boolean ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. logFormat string Log format for Alertmanager to be configured with. logLevel string Log level for Alertmanager to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. paused boolean If set to true all actions on the underlying managed objects are not goint to be performed, except for delete actions. podMetadata object PodMetadata configures Labels and Annotations which are propagated to the alertmanager pods. portName string Port name used for the pods and governing service. Defaults to web . priorityClassName string Priority class assigned to the Pods replicas integer Size is the expected size of the alertmanager cluster. The controller will eventually make the size of the running cluster equal to the expected size. resources object Define resources requests and limits for single Pods. retention string Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression [0-9]+(ms|s|m|h) (milliseconds seconds minutes hours). routePrefix string The route prefix Alertmanager registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . secrets array (string) Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/alertmanager/secrets/<secret-name> in the 'alertmanager' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. sha string SHA of Alertmanager container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. storage object Storage is the definition of how storage will be used by the Alertmanager instances. tag string Tag of Alertmanager container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tolerations array If specified, the pod's tolerations. 
tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. version string Version the cluster should be on. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the web command line flags when starting Alertmanager. 3.1.2. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 3.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 3.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. 
This array is replaced during a strategic merge patch. 3.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. 
values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. 
weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. 
null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.35. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.38. 
.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. 
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.54. .spec.alertmanagerConfigMatcherStrategy Description The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. Type object Property Type Description type string If set to OnNamespace, the operator injects a label matcher matching the namespace of the AlertmanagerConfig object for all its routes and inhibition rules. None will not add any additional matchers other than the ones specified in the AlertmanagerConfig. Default is OnNamespace. 3.1.55. .spec.alertmanagerConfigNamespaceSelector Description Namespaces to be selected for AlertmanagerConfig discovery. If nil, only the Alertmanager object's own namespace is checked. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.56. .spec.alertmanagerConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.57. .spec.alertmanagerConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.58. .spec.alertmanagerConfigSelector Description AlertmanagerConfigs to be selected and merged to configure Alertmanager.
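For example, a minimal sketch of an Alertmanager resource that selects AlertmanagerConfig objects by label might look like the following; the resource name and the alertmanagerConfig: example label are illustrative assumptions rather than values required by the API:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  replicas: 3
  alertmanagerConfigSelector:
    matchLabels:
      alertmanagerConfig: example   # only AlertmanagerConfig objects carrying this label are merged

The selector fields are listed in the table below.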
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.59. .spec.alertmanagerConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.60. .spec.alertmanagerConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.61. .spec.alertmanagerConfiguration Description EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This field may change in future releases. Type object Property Type Description global object Defines the global parameters of the Alertmanager configuration. name string The name of the AlertmanagerConfig resource which is used to generate the Alertmanager configuration. It must be defined in the same namespace as the Alertmanager object. The operator will not enforce a namespace label for routes and inhibition rules. templates array Custom notification templates. templates[] object SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. 3.1.62. .spec.alertmanagerConfiguration.global Description Defines the global parameters of the Alertmanager configuration. Type object Property Type Description httpConfig object HTTP client configuration. opsGenieApiKey object The default OpsGenie API Key. opsGenieApiUrl object The default OpsGenie API URL. pagerdutyUrl string The default Pagerduty URL. resolveTimeout string ResolveTimeout is the default value used by alertmanager if the alert does not include EndsAt, after this time passes it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. slackApiUrl object The default Slack API URL. smtp object Configures global SMTP parameters. 3.1.63. .spec.alertmanagerConfiguration.global.httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. 
The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.64. .spec.alertmanagerConfiguration.global.httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.65. .spec.alertmanagerConfiguration.global.httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.66. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 3.1.67. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.68. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.69. .spec.alertmanagerConfiguration.global.httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.70. .spec.alertmanagerConfiguration.global.httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 3.1.71. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.72. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.73. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.74. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.75. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.76. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.77. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. 
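For example, the CA certificate for the global HTTP client configuration can be read from a ConfigMap key, as in the following sketch; the example-config, webhook-ca, and ca.crt names are illustrative assumptions:

spec:
  alertmanagerConfiguration:
    name: example-config            # AlertmanagerConfig assumed to exist in the same namespace
    global:
      httpConfig:
        tlsConfig:
          ca:
            configMap:
              name: webhook-ca      # ConfigMap that holds the CA certificate
              key: ca.crt           # key inside the ConfigMap

The key and name fields of the ConfigMap reference are described in the table that follows.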
Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.78. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.79. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.80. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.81. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.82. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.83. .spec.alertmanagerConfiguration.global.opsGenieApiKey Description The default OpsGenie API Key. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.84. .spec.alertmanagerConfiguration.global.opsGenieApiUrl Description The default OpsGenie API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.85. .spec.alertmanagerConfiguration.global.slackApiUrl Description The default Slack API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.86. .spec.alertmanagerConfiguration.global.smtp Description Configures global SMTP parameters. Type object Property Type Description authIdentity string SMTP Auth using PLAIN authPassword object SMTP Auth using LOGIN and PLAIN. authSecret object SMTP Auth using CRAM-MD5. authUsername string SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. from string The default SMTP From header field. hello string The default hostname to identify to the SMTP server. requireTLS boolean The default SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. smartHost object The default SMTP smarthost used for sending emails. 3.1.87. .spec.alertmanagerConfiguration.global.smtp.authPassword Description SMTP Auth using LOGIN and PLAIN. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.88. .spec.alertmanagerConfiguration.global.smtp.authSecret Description SMTP Auth using CRAM-MD5. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.89. .spec.alertmanagerConfiguration.global.smtp.smartHost Description The default SMTP smarthost used for sending emails. Type object Required host port Property Type Description host string Defines the host's address, it can be a DNS name or a literal IP address. port string Defines the host's port, it can be a literal port number or a port name. 3.1.90. .spec.alertmanagerConfiguration.templates Description Custom notification templates. Type array 3.1.91. .spec.alertmanagerConfiguration.templates[] Description SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.92. .spec.alertmanagerConfiguration.templates[].configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.93. 
.spec.alertmanagerConfiguration.templates[].secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.94. .spec.containers Description Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator-generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader. Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 3.1.95. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy.
One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. 
Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.96. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.97. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.98. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace. 3.1.99. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select.
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.100. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.101. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.102. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.103. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.104. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.105. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.106. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.107. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. 
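For example, a preStop hook on a hypothetical injected sidecar might look like the following sketch; the container name, image, and sleep interval are assumptions for illustration only:

spec:
  containers:
    - name: auth-proxy                              # additional container added to the Alertmanager pod
      image: quay.io/example/auth-proxy:latest
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]   # allow in-flight connections to drain before shutdown

The postStart and preStop handlers are documented below.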
Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.108. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.109. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.110. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.111. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.112. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.113. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.114. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.115. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.116. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.117. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.118. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.119. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.120. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.121. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.122. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. 
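For example, a gRPC liveness probe on a hypothetical injected container might be sketched as follows; the container name, image, and port are assumptions, and the probed port must serve the standard gRPC health checking protocol:

spec:
  containers:
    - name: example-sidecar
      image: quay.io/example/sidecar:latest
      livenessProbe:
        grpc:
          port: 8081                # gRPC health checking endpoint exposed by the sidecar
        initialDelaySeconds: 10
        periodSeconds: 15

The port and service fields of the gRPC probe follow.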
Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.123. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.124. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.125. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.126. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.127. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.128. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.129. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. 
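As a sketch only, an HTTP readiness probe on a container entry could be configured as follows; the path, port name, and custom header are illustrative assumptions, not values mandated by this API:

readinessProbe:
  httpGet:
    path: /-/ready                # illustrative path
    port: web                     # a named container port; a port number also works
    scheme: HTTP
    httpHeaders:
      - name: X-Probe-Source      # hypothetical custom header
        value: kubelet
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3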
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.130. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.131. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.132. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.133. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.134. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.135. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.136. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.137. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.138. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.139. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.140. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. 
It makes that resource available inside a container. 3.1.141. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. 
If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.142. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.143. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.144. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.145. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). 
In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.146. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.147. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.148. 
.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.149. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.150. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.151. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.152. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.153. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.154. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.155. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.156. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). 
subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
3.1.157. .spec.hostAliases Description Pods' hostAliases configuration Type array
3.1.158. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry.
3.1.159. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array
3.1.160. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
3.1.161. .spec.initContainers Description InitContainers allows adding initContainers to the pod definition. Those can be used to, for example, fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify the operator-generated init containers if they share the same name, and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array
3.1.162. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not.
Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. 
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
3.1.163. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array
3.1.164. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)".
Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.165. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.166. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.167. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.168. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.169. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.170. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.171. 
.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.172. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.173. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.174. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.175. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.176. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.177. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.178. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.179. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.180. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.181. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.182. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.183. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.184. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.185. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.186. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.187. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. 
If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.188. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.189. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.190. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.191. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.192. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.193. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.194. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. 
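For illustration, declaring a named container port on an init container entry might look like the following sketch; the port name, number, and protocol are placeholders:

ports:
  - name: metrics            # placeholder IANA_SVC_NAME, unique within the pod
    containerPort: 9090      # placeholder port number, 0 < x < 65536
    protocol: TCP            # defaults to TCP if omitted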
Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.195. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.196. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.197. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. 
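A minimal sketch of an exec-based readiness probe follows; the command is a placeholder and is assumed to exit 0 when the container is ready:

readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - test -f /tmp/ready    # placeholder readiness check
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3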
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.198. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.199. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.200. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.201. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.202. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.203. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.204. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.205. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.206. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.207. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.208. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. 
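A typical resources block for a container, with illustrative values; if requests is omitted, it defaults to limits when limits is explicitly set.

resources:
  requests:
    cpu: 100m                            # illustrative values
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi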
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.209. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.210. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.211. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. 
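The following is a minimal sketch of a hardened container securityContext using the fields described above; the UID is illustrative.

securityContext:
  runAsNonRoot: true
  runAsUser: 65534                       # illustrative non-root UID
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
    - ALL                                # drop all capabilities granted by the runtime
  seccompProfile:
    type: RuntimeDefault                 # use the container runtime's default seccomp profile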
Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.212. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.213. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.214. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.215. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.216. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.217. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.218. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.219. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. 
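Because no other probes run until the startup probe succeeds, the time allowed for slow initialization is roughly failureThreshold multiplied by periodSeconds. A sketch with illustrative values:

startupProbe:
  httpGet:
    path: /-/healthy                     # illustrative path
    port: 9093
  periodSeconds: 10
  failureThreshold: 30                   # up to 30 x 10s = 300s before the container is restarted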
Name must be an IANA_SVC_NAME. 3.1.220. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.221. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.222. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.223. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.224. .spec.podMetadata Description PodMetadata configures Labels and Annotations which are propagated to the alertmanager pods. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 3.1.225. .spec.resources Description Define resources requests and limits for single Pods. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
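A sketch of volumeMounts and volumeDevices entries; the volume and claim names are hypothetical and must match volumes defined in the pod.

volumeMounts:
- name: config-volume                    # must match the name of a volume in the pod
  mountPath: /etc/config
  readOnly: true
- name: data
  mountPath: /data
  subPath: alertmanager                  # mount only this sub-directory of the volume
volumeDevices:
- name: raw-block-claim                  # must match a persistentVolumeClaim volume name in the pod
  devicePath: /dev/block0                # illustrative device path inside the container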
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.226. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.227. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.228. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. 
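A sketch of the pod-level podMetadata and resources fields described above, shown as a fragment of the resource's spec; the labels, annotations, and values are illustrative.

spec:
  podMetadata:
    labels:
      app.kubernetes.io/part-of: monitoring     # illustrative label propagated to the pods
    annotations:
      example.com/owner: platform-team          # illustrative annotation
  resources:
    requests:
      cpu: 50m
      memory: 200Mi
    limits:
      memory: 400Mi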
If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.229. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.230. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.231. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.232. 
.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 3.1.233. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.234. .spec.storage Description Storage is the definition of how storage will be used by the Alertmanager instances. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in k8s 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 3.1.235. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
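For example, the following spec fragment backs storage with a memory-backed emptyDir; the values are illustrative, and the ephemeral and volumeClaimTemplate alternatives are sketched later in this section.

spec:
  storage:
    emptyDir:
      medium: Memory                     # back the volume with tmpfs; omit for the node's default medium
      sizeLimit: 500Mi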
3.1.236. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in k8s 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 3.1.237. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 3.1.238. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 3.1.239.
.spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. 
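A sketch of an ephemeral volume claim template using the spec fields described above; the StorageClass name and size are hypothetical.

spec:
  storage:
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          storageClassName: standard-csi        # hypothetical StorageClass name
          resources:
            requests:
              storage: 10Gi                      # illustrative size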
volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.240. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.241. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. 
(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.242. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.243. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.244. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.245. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.246. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.247. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.248. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 3.1.249. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 3.1.250. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. 
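A sketch of a persistent volumeClaimTemplate under .spec.storage, with a hypothetical label and StorageClass name; labels and annotations set in the template metadata are carried onto the PVCs that the StatefulSet generates.

spec:
  storage:
    volumeClaimTemplate:
      metadata:
        labels:
          example.com/retain: "true"             # illustrative label copied to the generated PVC
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: standard-csi           # hypothetical StorageClass name
        resources:
          requests:
            storage: 20Gi                        # illustrative size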
When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.251. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. 
If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.252. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.253. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.254. .spec.storage.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.255. .spec.storage.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.256. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.257. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.258. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.259. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources integer-or-string allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. 
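The matchLabels and matchExpressions requirements in a selector are ANDed; a sketch with illustrative PersistentVolume labels follows.

selector:
  matchLabels:
    volume-tier: ssd                     # illustrative PV label
  matchExpressions:
  - key: environment                     # illustrative label key
    operator: In
    values:
    - production
    - staging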
If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. resizeStatus string resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. 3.1.260. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 3.1.261. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 3.1.262. .spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.263. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 3.1.264. 
.spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 3.1.265. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. 
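To make the maxSkew, topologyKey, and whenUnsatisfiable semantics described in this section concrete, a minimal topologySpreadConstraints stanza might look like the following sketch; the zone key is the standard topology.kubernetes.io/zone label and the pod label is hypothetical:

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: alertmanager   # hypothetical pod label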
This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 3.1.266. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.267. 
.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.268. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.269. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. Type array 3.1.270. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.271. .spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 3.1.272. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. 
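Because VolumeMounts listed under spec.volumeMounts are appended to the generated alertmanager container, they are typically paired with a matching entry under spec.volumes; a minimal sketch, with hypothetical names:

volumeMounts:
  - name: extra-config                 # hypothetical volume name
    mountPath: /etc/alertmanager/extra # hypothetical mount path
    readOnly: true
volumes:
  - name: extra-config
    secret:
      secretName: extra-config         # hypothetical Secret name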
More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 3.1.273. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.274. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.275. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.276. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.277. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.278. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 3.1.279. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. 
Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.280. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.281. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.282. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.283. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". 
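As an illustration of the configMap volume fields above (defaultMode, items, optional), the following sketch projects one key of a hypothetical ConfigMap to a custom path with explicit permissions:

volumes:
  - name: web-config                    # hypothetical volume name
    configMap:
      name: alertmanager-web-config     # hypothetical ConfigMap
      optional: false
      defaultMode: 420                  # decimal for octal 0644
      items:
        - key: web.yml
          path: web/web.yml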
If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 3.1.284. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.285. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.286. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 3.1.287. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.288.
.spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.289. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.290. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 3.1.291. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). 
An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 3.1.292. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 3.1.293. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 3.1.294. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
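Pulling the volumeClaimTemplate fields together, a minimal generic ephemeral volume entry under spec.volumes might look like the following sketch; the StorageClass name and size are hypothetical:

volumes:
  - name: scratch                      # hypothetical volume name
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: standard   # hypothetical StorageClass
          resources:
            requests:
              storage: 2Gi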
dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.295. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.296. 
.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.297. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. 
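For reference, a volumeClaimTemplate.spec that pre-populates the volume from a snapshot by using the dataSourceRef fields above might look like this sketch; the snapshot name is hypothetical:

volumeClaimTemplate:
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
    dataSourceRef:
      apiGroup: snapshot.storage.k8s.io
      kind: VolumeSnapshot
      name: scratch-snapshot           # hypothetical VolumeSnapshot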
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.298. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.299. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.300. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.301. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.302. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.303. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 3.1.304. 
.spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 3.1.305. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.306. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 3.1.307. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 3.1.308. 
.spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.309. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.310. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 3.1.311. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. 
iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.312. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.313. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.314. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 3.1.315. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 3.1.316. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.317. 
.spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.318. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.319. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 3.1.320. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.321. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.322. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.323. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.324. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 3.1.325. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.326. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.327. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.328. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 3.1.329. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.330. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.331. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.332. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. 
registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 3.1.333. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.334. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.335. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. 
sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.336. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.337. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.338. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.339. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.340. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.341. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.342. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 3.1.343. .spec.web Description Defines the web command line flags when starting Alertmanager. Type object Property Type Description getConcurrency integer Maximum number of GET requests processed concurrently. This corresponds to the Alertmanager's --web.get-concurrency flag. httpConfig object Defines HTTP parameters for web server. timeout integer Timeout for HTTP requests. This corresponds to the Alertmanager's --web.timeout flag. tlsConfig object Defines the TLS parameters for HTTPS. 3.1.344. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 3.1.345. 
.spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 3.1.346. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 3.1.347. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.348. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.349. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.350. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.351. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.352. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.353. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.354. .status Description Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Alertmanager cluster. conditions array The current state of the Alertmanager object. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Alertmanager object (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this Alertmanager object. updatedReplicas integer Total number of non-terminated pods targeted by this Alertmanager object that have the desired version spec. 3.1.355. .status.conditions Description The current state of the Alertmanager object. Type array 3.1.356. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. 
message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 3.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/alertmanagers GET : list objects of kind Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers DELETE : delete collection of Alertmanager GET : list objects of kind Alertmanager POST : create an Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} DELETE : delete an Alertmanager GET : read the specified Alertmanager PATCH : partially update the specified Alertmanager PUT : replace the specified Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status GET : read status of the specified Alertmanager PATCH : partially update status of the specified Alertmanager PUT : replace status of the specified Alertmanager 3.2.1. /apis/monitoring.coreos.com/v1/alertmanagers Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description list objects of kind Alertmanager Table 3.2. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty 3.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Alertmanager Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Alertmanager Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty HTTP method POST Description create an Alertmanager Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body Alertmanager schema Table 3.11. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 202 - Accepted Alertmanager schema 401 - Unauthorized Empty 3.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the Alertmanager namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
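The list and create operations described above, together with the per-object methods documented below, map onto standard OpenShift CLI workflows. The following sketch is illustrative only: the monitoring-example namespace, the example object name, and the alertmanager-web-tls secret are placeholder values, and the manifest is a minimal example rather than a recommended configuration.
# List Alertmanager objects in all namespaces (GET on the cluster-scoped list endpoint).
oc get alertmanagers --all-namespaces
# Create an Alertmanager object (POST to the namespaced collection endpoint).
# spec.web.tlsConfig is optional; it references a placeholder secret assumed to hold
# the serving certificate (tls.crt) and key (tls.key).
cat <<EOF | oc create -f -
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
  namespace: monitoring-example
spec:
  replicas: 3
  web:
    tlsConfig:
      cert:
        secret:
          name: alertmanager-web-tls
          key: tls.crt
      keySecret:
        name: alertmanager-web-tls
        key: tls.key
      minVersion: TLS12
EOF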
HTTP method DELETE Description delete an Alertmanager Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Alertmanager Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Alertmanager Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Alertmanager Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body Alertmanager schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty 3.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the Alertmanager namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Alertmanager Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Alertmanager Table 3.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.30. Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Alertmanager Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body Alertmanager schema Table 3.34. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/monitoring_apis/alertmanager-monitoring-coreos-com-v1
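Continuing the illustrative sketch above (same placeholder namespace and object name), the per-object and status endpoints documented in this section can be exercised with the CLI as follows; the raw path matches the /status route listed above.
# Read the specified Alertmanager (GET).
oc get alertmanager example -n monitoring-example -o yaml
# Partially update the specified Alertmanager (PATCH with a merge patch).
oc patch alertmanager example -n monitoring-example --type merge -p '{"spec":{"replicas":2}}'
# Read only the status subresource (GET on the /status path).
oc get --raw /apis/monitoring.coreos.com/v1/namespaces/monitoring-example/alertmanagers/example/status
# Delete the specified Alertmanager (DELETE).
oc delete alertmanager example -n monitoring-example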
Chapter 7. Brokers page | Chapter 7. Brokers page The Brokers page shows all the brokers created for a Kafka cluster. For each broker, you can see its status, as well as the distribution of partitions across the brokers, including the number of partition leaders and replicas. The broker status is shown as one of the following: Stable A stable broker is operating normally without significant issues. Unstable An unstable broker may be experiencing issues, such as high resource usage or network problems. If the broker has a rack ID, this is the ID of the rack or datacenter in which the broker resides. Click the right arrow (>) next to a broker name to see more information about the broker, including its hostname and disk usage. Note Consider rebalancing if the distribution is uneven to ensure efficient resource utilization. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_console/con-brokers-page-str
Private Automation Hub life cycle | Private Automation Hub life cycle Red Hat Ansible Automation Platform 2.3 Maintenance and Updates Statement for Automation Hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/private_automation_hub_life_cycle/index |
Chapter 2. January 2025 | Chapter 2. January 2025 2.1. Product-wide Updates 2.1.1. PagerDuty integration Red Hat Hybrid Cloud Console now integrates with PagerDuty. This integration sends events detected by Insights and the Hybrid Cloud Console from the console to PagerDuty. When an issue in the Hybrid Cloud Console occurs, PagerDuty automatically notifies the relevant team members through phone calls, SMS, emails, or push notifications. PagerDuty uses scheduling to ensure that team members only receive alerts during their designated on-call periods. You can create escalation policies to escalate an issue to another team member if the first on-call team member does not respond to the alert. PagerDuty also provides incident management features for a coordinated response to resolve issues. For more information about PagerDuty, see: Blog: Sending alerts to PagerDuty Demo video: Sending alerts to PagerDuty Product documentation: Integrating PagerDuty with the Red Hat Hybrid Cloud Console 2.2. Red Hat Insights for Red Hat Enterprise Linux 2.2.1. General Interactive demos Red Hat Insights now has a set of interactive demos (created in Arcade software) designed to walk you through key Insights use cases. These step-by-step guides make it easier to understand and apply Insights in actual working scenarios with direct examples. Explore them all at Interactive demos for Red Hat Insights . The demos include the following subjects: Save time and money with Red Hat Insights (overview) Building and launching compliant images Enhanced integration of Microsoft Azure and Red Hat Evaluating and remediating for regulatory compliance Evaluating systems for Common Vulnerabilities and Exposures (CVEs) Malware detection Predictive system analytics - Fix issues before they cause you trouble Providing user access to a service Reviewing system inventory Sending alerts to PagerDuty Subscription management basics Published blogs and resources Webinar (on-demand): Empowering Red Hat Partners with Red Hat Insights: Boost Efficiency, Security, and Revenue Blog: Sending alerts to PagerDuty by John Spinks (January 24, 2025) Video: Sending alerts to PagerDuty Blog: Streamline the connectivity between your environment and Red Hat Insights services by McKibbin Brady and Tihomir Hadzhiev (January 29, 2025) Webinar (on-demand): Webinar (on-demand): One Platform. Unlimited Potential (January 30, 2025) (January 30, 2025) Red Hat Insights API Cheat Sheet The Red Hat Insights API Cheat Sheet (v6) has been updated with the latest changes. Changes include the deprecation of Basic Authentication, new examples for exporting inventory, and updated links to Compliance (v2 API), Subscriptions, and API documentation. 2.2.2. Inventory Inventory Fixes and Improvements Several updates and fixes have been implemented to improve Inventory, including: Fixed an issue in which special Unicode characters in a host display_name could cause Inventory export to fail Better integration between Insights and Red Hat Subscription Management (RHSM) to automatically synchronize host deletion Updated table and filter drop-down selectors, based on UX recommendations for improved usability Improved API documentation to better clarify the differences between POST and PATCH for the /groups/<group id>/hosts method. 2.2.3. 
Advisor New recommendations The Insights Advisor service now detects and recommends solutions for the following issues: GFS2 filesystem hang occurred when handling offset lying in the final block for a given height System runs out of memory due to an infinite loop issue in the ceph-mds RHEL AI is running with unsupported hardware Non-root users cannot run the crontab command when the SUID and SGID bits are both removed from /usr/bin/crontab executable File creation failed with error message when the last AG cannot be allocated to a new inode group Kernel panic will occur on the edge computing system after reboot when the NFSD is running with the NFS filesystem mounted Kernel panic occurs when a memory "Hardware Error" is reported on an edge computing system due to a known bug in running kernel Red Hat has discontinued technical support services as well as software maintenance services for the End-Of-Life RHEL AI Red Hat will discontinue technical support services as well as software maintenance services for End-Of-Life RHEL AI in less than 30 days Red Hat will discontinue technical support services as well as software maintenance services for End-Of-Life RHEL AI in less than 7 days Unable to update bootc when the credential file is empty | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/release_notes/january-2025 |
1.2. The Identity Management Domain | 1.2. The Identity Management Domain The Identity Management (IdM) domain consists of a group of machines that share the same configuration, policies, and identity stores. The shared properties allow the machines within the domain to be aware of each other and operate together. From the perspective of IdM, the domain includes the following types of machines: IdM servers, which work as domain controllers IdM clients, which are enrolled with the servers IdM servers are also IdM clients enrolled with themselves: server machines provide the same functionality as clients. IdM supports Red Hat Enterprise Linux machines as the IdM servers and clients. Note This guide describes using IdM in Linux environments. For more information on integration with Active Directory, see the Windows Integration Guide . 1.2.1. Identity Management Servers The IdM servers act as central repositories for identity and policy information. They also host the services used by domain members. IdM provides a set of management tools to manage all the IdM-associated services centrally: the IdM web UI and command-line utilities. For information on installing IdM servers, see Chapter 2, Installing and Uninstalling an Identity Management Server . To support redundancy and load balancing, the data and configuration can be replicated from one IdM server to another: a replica of the initial server. You can configure servers and their replicas to provide different services to clients. For more details on IdM replicas, see Chapter 4, Installing and Uninstalling Identity Management Replicas . 1.2.1.1. Services Hosted by IdM Servers Most of the following services are not strictly required to be installed on the IdM server. For example, services such as a certificate authority (CA), a DNS server, or a Network Time Protocol (NTP) server can be installed on an external server outside the IdM domain. Kerberos: krb5kdc and kadmin IdM uses the Kerberos protocol to support single sign-on. With Kerberos, users only need to present the correct username and password once and can access IdM services without the system prompting for credentials again. Kerberos is divided into two parts: The krb5kdc service is the Kerberos Authentication service and Key Distribution Center (KDC) daemon. The kadmin service is the Kerberos database administration program. For details on how Kerberos works, see the Using Kerberos in the System-Level Authentication Guide . For information on how to authenticate using Kerberos in IdM, see Section 5.2, "Logging into IdM Using Kerberos" . For information on managing Kerberos in IdM, see Chapter 29, Managing the Kerberos Domain . LDAP directory server: dirsrv The IdM internal LDAP directory server instance stores all IdM information, such as information related to Kerberos, user accounts, host entries, services, policies, DNS, and others. The LDAP directory server instance is based on the same technology as Red Hat Directory Server . However, it is tuned to IdM-specific tasks. Note This guide refers to this component as Directory Server. Certificate Authority: pki-tomcatd The integrated Certificate Authority (CA) is based on the same technology as Red Hat Certificate System . pki is the Command-Line Interface for accessing Certificate System services. For more details on installing an IdM server with different CA configurations, see Section 2.3.2, "Determining What CA Configuration to Use" . 
Note This guide refers to this component as Certificate System when addressing the implementation and as certificate authority when addressing the services provided by the implementation. For information relating to Red Hat Certificate System, a standalone Red Hat product, see Product Documentation for Red Hat Certificate System . Domain Name System (DNS): named IdM uses DNS for dynamic service discovery. The IdM client installation utility can use information from DNS to automatically configure the client machine. After the client is enrolled in the IdM domain, it uses DNS to locate IdM servers and services within the domain. The BIND (Berkeley Internet Name Domain) implementation of the DNS (Domain Name System) protocols in Red Hat Enterprise Linux includes the named DNS server. named-pkcs11 is a version of the BIND DNS server built with native support for the PKCS#11 cryptographic standard. For more information about service discovery, see the Configuring DNS Service Discovery in the System-Level Authentication Guide . For more information on the DNS server, see BIND in the Red Hat Enterprise Linux Networking Guide . For information on using DNS with IdM and important prerequisites, see Section 2.1.5, "Host Name and DNS Configuration" . For details on installing an IdM server with or without integrated DNS, see Section 2.3.1, "Determining Whether to Use Integrated DNS" . Network Time Protocol: ntpd Many services require that servers and clients have the same system time, within a certain variance. For example, Kerberos tickets use time stamps to determine their validity and to prevent replay attacks. If the times between the server and client skew outside the allowed range, the Kerberos tickets are invalidated. By default, IdM uses the Network Time Protocol (NTP) to synchronize clocks over a network via the ntpd service. With NTP, a central server acts as an authoritative clock and the clients synchronize their times to match the server clock. The IdM server is configured as the NTP server for the IdM domain during the server installation process. Note Running an NTP server on an IdM server installed on a virtual machine can lead to inaccurate time synchronization in some environments. To avoid potential problems, do not run NTP on IdM servers installed on virtual machines. For more information on the reliability of an NTP server on a virtual machine, see this Knowledgebase solution . Apache HTTP Server: httpd The Apache HTTP web server provides the IdM Web UI, and also manages communication between the Certificate Authority and other IdM services. For more information, see The Apache HTTP Server in the System Administrator's Guide . Samba / Winbind: smb , winbind Samba implements the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol), in Red Hat Enterprise Linux. Via the smb service, the SMB protocol enables you to access resources on a server, such as file shares and shared printers. If you have configured a Trust with an Active Directory (AD) environment, the Winbind service manages communication between IdM servers and AD servers. For more information, see Samba in the System Administrator's Guide . For more information, see the Winbind in the System-Level Authentication Guide One-time password (OTP) authentication: ipa-otpd One-time passwords (OTP) are passwords that are generated by an authentication token for only one session, as part of two-factor authentication. 
OTP authentication is implemented in Red Hat Enterprise Linux via the ipa-otpd service. For more information about OTP authentication, see Section 22.3, "One-Time Passwords" . Custodia: ipa-custodia Custodia is a Secrets Services provider, it stores and shares access to secret material such as passwords, keys, tokens, certificates. OpenDNSSEC: ipa-dnskeysyncd OpenDNSSEC is a DNS manager that automates the process of keeping track of DNS security extensions (DNSSEC) keys and the signing of zones. The ipa-dnskeysyncd servuce manages synchronization between the IdM Directory Server and OpenDNSSEC. Figure 1.1. The Identity Management Server: Unifying Services 1.2.2. Identity Management Clients IdM clients are machines configured to operate within the IdM domain. They interact with the IdM servers to access domain resources. For example, they belong to the Kerberos domains configured on the servers, receive certificates and tickets issued by the servers, and use other centralized services for authentication and authorization. An IdM client does not require dedicated client software to interact as a part of the domain. It only requires proper system configuration of certain services and libraries, such as Kerberos or DNS. This configuration directs the client machine to use IdM services. For information on installing IdM clients, see Chapter 3, Installing and Uninstalling Identity Management Clients . 1.2.2.1. Services Hosted by IdM Clients System Security Services Daemon: sssd The System Security Services Daemon (SSSD) is the client-side application that manages user authentication and caching credentials. Caching enables the local system to continue normal authentication operations if the IdM server becomes unavailable or if the client goes offline. For more information, see Configuring SSSD in the System-Level Authentication Guide . SSSD also supports Windows Active Directory (AD). For more information about using SSSD with AD, see the Using Active Directory as an Identity Provider for SSSD in the Windows Integration Guide . certmonger The certmonger service monitors and renews the certificates on the client. It can request new certificates for the services on the system. For more information, see Working with certmonger in the System-Level Authentication Guide . Figure 1.2. Interactions Between IdM Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/idm-domain |
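To see most of these moving parts at a glance on a running deployment, the standard IdM tooling is enough; the sketch below uses stock commands, but the exact service list depends on which optional components (CA, DNS, and so on) you installed.
# On an IdM server: status of all IdM-managed services (Directory Server, KDC, httpd, and so on).
ipactl status
# Verify Kerberos single sign-on end to end: obtain a ticket as admin, then list it.
kinit admin
klist
# On an IdM client: the daemons described above are SSSD and certmonger.
systemctl status sssd certmonger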
3.2. Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3 | 3.2. Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3 Upgrading your environment from 4.2 to 4.3 involves the following steps: Make sure you meet the prerequisites, including enabling the correct repositories Use the Log Collection Analysis tool and Image Discrepancies tool to check for issues that might prevent a successful upgrade Update the 4.2 Manager to the latest version of 4.2 Upgrade the database from PostgreSQL 9.5 to 10.0 Upgrade the Manager from 4.2 to 4.3 Update the hosts Update the compatibility version of the clusters Reboot any running or suspended virtual machines to update their configuration Update the compatibility version of the data centers If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must replace the certificates now . 3.2.1. Prerequisites Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes. Ensure your environment meets the requirements for Red Hat Virtualization 4.4. For a complete list of prerequisites, see the Planning and Prerequisites Guide . When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure. 3.2.2. Analyzing the Environment It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them. 3.2.3. Log Collection Analysis tool Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. The tool gathers detailed information about your system and presents it as an HTML file. Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Install the Log Collection Analysis tool on the Manager machine: Run the tool: A detailed report is displayed. By default, the report is saved to a file called analyzer_report.html . To save the file to a specific location, use the --html flag and specify the location: # rhv-log-collector-analyzer --live --html=/ directory / filename .html You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser: Launch ELinks and open analyzer_report.html . To navigate the report, use the following commands in ELinks: Insert to scroll up Delete to scroll down PageUp to page up PageDown to page down Left Bracket to scroll left Right Bracket to scroll right 3.2.3.1. 
Monitoring snapshot health with the image discrepancies tool The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database. It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as: Before upgrading versions, to avoid carrying over broken volumes or chains to the new version. Following a failed storage operation, to detect volumes or attributes in a bad state. After restoring the RHV database or storage from backup. Periodically, to detect potential problems before they worsen. To analyze a snapshot- or live storage migration-related issues, and to verify system health after fixing these types of problems. Prerequisites Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev . Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual Machines can remain running normally during the process. Procedure To run the tool, enter the following command on the RHV Manager: If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running. Note This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database. Understanding the results The tool reports the following: If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage. If some volume attributes differ between the storage and the database. Sample output: You can now update the Manager to the latest version of 4.2. 3.2.4. Updating the Red Hat Virtualization Manager Prerequisites Ensure the Manager has the correct repositories enabled . For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . 
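If you want to confirm what the engine is actually using before you re-run engine-setup, engine-config can print and set individual values; a brief sketch, reusing the option named in the note above:
# Print the value currently in effect for this option.
engine-config -g SANWipeAfterDelete
# Values set this way persist across engine-setup runs, even though the setup
# preview may still display the original default; a restart applies the change.
engine-config -s SANWipeAfterDelete=true
systemctl restart ovirt-engine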
Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update. 3.2.5. Upgrading remote databases from PostgreSQL 9.5 to 10 Red Hat Virtualization 4.3 uses PostgreSQL 10 instead of PostgreSQL 9.5. If your databases are installed locally, the upgrade script automatically upgrades them from version 9.5 to 10. However, if either of your databases (Manager or Data Warehouse) is installed on a separate machine, you must perform the following procedure on each remote database before upgrading the Manager. Stop the service running on the machine: When upgrading the Manager database, stop the ovirt-engine service on the Manager machine: # systemctl stop ovirt-engine When upgrading the Data Warehouse database, stop the ovirt-engine-dwhd service on the Data Warehouse machine: # systemctl stop ovirt-engine-dwhd Enable the required repository to receive the PostgreSQL 10 package: Enable either the Red Hat Virtualization Manager repository: # subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms or the SCL repository: # subscription-manager repos --enable rhel-server-rhscl-7-rpms Install the PostgreSQL 10 packages: Stop and disable the PostgreSQL 9.5 service: Upgrade the PostgreSQL 9.5 database to PostgreSQL 10: Start and enable the rh-postgresql10-postgresql.service and check that it is running: Ensure that you see output similar to the following: Copy the pg_hba.conf client configuration file from the PostgreSQL 9.5 environment to the PostgreSQL 10 environment: # cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf Update the following parameters in /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf : listen_addresses='*' autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem = 8192 Restart the PostgreSQL 10 service to apply the configuration changes: You can now upgrade the Manager to 4.3. 3.2.6. Upgrading the Red Hat Virtualization Manager from 4.2 to 4.3 Follow these same steps when upgrading any of the following: the Red Hat Virtualization Manager a remote machine with the Data Warehouse service You need to be logged into the machine that you are upgrading. Important If the upgrade fails, the engine-setup command attempts to restore your Red Hat Virtualization Manager installation to its state. For this reason, do not remove the version's repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation. Procedure Enable the Red Hat Virtualization 4.3 repositories: # subscription-manager repos \ --enable=rhel-7-server-rhv-4.3-manager-rpms \ --enable=jb-eap-7.2-for-rhel-7-server-rpms All other repositories remain the same across Red Hat Virtualization releases. 
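Before moving on, it can be worth confirming that the intended repositories are the ones enabled; one way to check, with the channel names from the step above:
# List every repository currently enabled for this system.
subscription-manager repos --list-enabled
# Narrow the output to the RHV 4.3 and JBoss EAP 7.2 channels used by the upgrade.
subscription-manager repos --list-enabled | grep -E 'rhv-4\.3|jb-eap-7\.2'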
Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Run engine-setup and follow the prompts to upgrade the Red Hat Virtualization Manager, the remote database or remote service: # engine-setup Note During the upgrade process for the Manager, the engine-setup script might prompt you to disconnect the remote Data Warehouse database. You must disconnect it to continue the setup. When the script completes successfully, the following message appears: Execution of setup completed successfully Disable the Red Hat Virtualization 4.2 repositories to ensure the system does not use any 4.2 packages: # subscription-manager repos \ --disable=rhel-7-server-rhv-4.2-manager-rpms \ --disable=jb-eap-7-for-rhel-7-server-rpms Update the base operating system: # yum update Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the upgrade. The Manager is now upgraded to version 4.3. 3.2.6.1. Completing the remote Data Warehouse database upgrade Complete these additional steps when upgrading a remote Data Warehouse database from PostgreSQL 9.5 to 10. Procedure The ovirt-engine-dwhd service is now running on the Manager machine. If the ovirt-engine-dwhd service is on a remote machine, stop and disable the ovirt-engine-dwhd service on the Manager machine, and remove the configuration files that engine-setup created: # systemctl stop ovirt-engine-dwhd # systemctl disable ovirt-engine-dwhd # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/* Repeat the steps in Upgrading the Manager to 4.3 on the machine hosting the ovirt-engine-dwhd service. You can now update the hosts. 3.2.7. Updating All Hosts in a Cluster You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates. Update one cluster at a time. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead. Procedure In the Administration Portal, click Compute Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster. Click Upgrade . Select the hosts to update, then click . Configure the options: Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. 
You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update. Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60 . You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly. Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default. Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot. Use Maintenance Policy sets the cluster's scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option. Click . Review the summary of the hosts and virtual machines that are affected. Click Upgrade . A cluster upgrade status screen displays with a progress bar showing the precentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process. You can track the progress of host updates: in the Compute Clusters view, the Upgrade Status column displays a progress bar that displays the percentage of completion. in the Compute Hosts view in the Events section of the Notification Drawer ( ). You can track the progress of individual virtual machine migrations in the Status column of the Compute Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines. 3.2.8. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon to the host indicating an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. 
If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. 3.2.9. Changing Virtual Machine Cluster Compatibility After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon ( ). Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes. Procedure In the Administration Portal, click Compute Virtual Machines . Check which virtual machines require a reboot. In the Vms: search bar, enter the following query: next_run_config_exists=True The search results show all virtual machines with pending changes. Select each virtual machine and click Restart . Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself. When the virtual machine starts, the new compatibility version is automatically applied. Note You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. 3.2.10. Changing the Data Center Compatibility Version Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level. Prerequisites To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center. Procedure In the Administration Portal, click Compute Data Centers . Select the data center to change and click Edit . Change the Compatibility Version to the desired value. Click OK . The Change Data Center Compatibility Version confirmation dialog opens. Click OK to confirm. If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now. 3.2.11. Replacing SHA-1 Certificates with SHA-256 Certificates Red Hat Virtualization 4.4 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. 
Newly installed systems do not require any special steps to enable Red Hat Virtualization's public key infrastructure (PKI) to use SHA-256 signatures. Warning Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error prone and time consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide . Preventing Warning Messages from Appearing in the Browser Log in to the Manager machine as the root user. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256 : # cat /etc/pki/ovirt-engine/openssl.conf If it still includes default_md = sha1 , back up the existing configuration and change the default to sha256 : # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."USD(date +"%Y%m%d%H%M%S")" # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf Define the certificate that should be re-signed: # names="apache" On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates: # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf # for name in USDnames; do subject="USD( openssl \ x509 \ -in /etc/pki/ovirt-engine/certs/"USD{name}".cer \ -noout \ -subject \ -nameopt compat \ | sed \ 's;subject=\(.*\);\1;' \ )" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \ --name="USD{name}" \ --password=mypass \ <1> --subject="USD{subject}" \ --san=DNS:"USD{ENGINE_FQDN}" \ --keep-key done Do not change this the password value. Restart the httpd service: # systemctl restart httpd Connect to the Administration Portal to confirm that the warning no longer appears. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority's certificate, navigate to http:// your-manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing your-manager-fqdn with the fully qualified domain name (FQDN). Replacing All Signed Certificates with SHA-256 Log in to the Manager machine as the root user. 
Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256 : # cat /etc/pki/ovirt-engine/openssl.conf If it still includes default_md = sha1 , back up the existing configuration and change the default to sha256 : # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."USD(date +"%Y%m%d%H%M%S")" # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new : # cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."USD(date +"%Y%m%d%H%M%S")" # openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256 Replace the existing certificate with the new certificate: # mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem Define the certificates that should be re-signed: # names="engine apache websocket-proxy jboss imageio-proxy" If you replaced the Red Hat Virtualization Manager SSL Certificate after the upgrade, run the following instead: # names="engine websocket-proxy jboss imageio-proxy" For more details see Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide . On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates: # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf # for name in USDnames; do subject="USD( openssl \ x509 \ -in /etc/pki/ovirt-engine/certs/"USD{name}".cer \ -noout \ -subject \ -nameopt compat \ | sed \ 's;subject=\(.*\);\1;' \ )" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \ --name="USD{name}" \ --password=mypass \ <1> --subject="USD{subject}" \ --san=DNS:"USD{ENGINE_FQDN}" \ --keep-key done Do not change this the password value. Restart the following services: # systemctl restart httpd # systemctl restart ovirt-engine # systemctl restart ovirt-websocket-proxy # systemctl restart ovirt-imageio Connect to the Administration Portal to confirm that the warning no longer appears. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority's certificate, navigate to http:// your-manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing your-manager-fqdn with the fully qualified domain name (FQDN). Enroll the certificates on the hosts. Repeat the following procedure for each host. In the Administration Portal, click Compute Hosts . Select the host and click Management Maintenance and OK . Once the host is in maintenance mode, click Installation Enroll Certificate . Click Management Activate . | [
"yum install rhv-log-collector-analyzer",
"rhv-log-collector-analyzer --live",
"rhv-log-collector-analyzer --live --html=/ directory / filename .html",
"yum install -y elinks",
"elinks /home/user1/analyzer_report.html",
"rhv-image-discrepancies",
"Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624) image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976) Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes No discrepancies found",
"engine-upgrade-check",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"yum update --nobest",
"systemctl stop ovirt-engine",
"systemctl stop ovirt-engine-dwhd",
"subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms",
"subscription-manager repos --enable rhel-server-rhscl-7-rpms",
"yum install rh-postgresql10 rh-postgresql10-postgresql-contrib",
"systemctl stop rh-postgresql95-postgresql systemctl disable rh-postgresql95-postgresql",
"scl enable rh-postgresql10 -- postgresql-setup --upgrade-from=rh-postgresql95-postgresql --upgrade",
"systemctl start rh-postgresql10-postgresql.service systemctl enable rh-postgresql10-postgresql.service systemctl status rh-postgresql10-postgresql.service",
"rh-postgresql10-postgresql.service - PostgreSQL database server Loaded: loaded (/usr/lib/systemd/system/rh-postgresql10-postgresql.service; enabled; vendor preset: disabled) Active: active (running) since",
"cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf",
"listen_addresses='*' autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem = 8192",
"systemctl restart rh-postgresql10-postgresql.service",
"subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"subscription-manager repos --disable=rhel-7-server-rhv-4.2-manager-rpms --disable=jb-eap-7-for-rhel-7-server-rpms",
"yum update",
"systemctl stop ovirt-engine-dwhd systemctl disable ovirt-engine-dwhd rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*",
"next_run_config_exists=True",
"cat /etc/pki/ovirt-engine/openssl.conf",
"cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf.\"USD(date +\"%Y%m%d%H%M%S\")\" sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf",
"names=\"apache\"",
". /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf for name in USDnames; do subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/\"USD{name}\".cer -noout -subject -nameopt compat | sed 's;subject=\\(.*\\);\\1;' )\" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass \\ <1> --subject=\"USD{subject}\" --san=DNS:\"USD{ENGINE_FQDN}\" --keep-key done",
"systemctl restart httpd",
"cat /etc/pki/ovirt-engine/openssl.conf",
"cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf.\"USD(date +\"%Y%m%d%H%M%S\")\" sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf",
"cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem.\"USD(date +\"%Y%m%d%H%M%S\")\" openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256",
"mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem",
"names=\"engine apache websocket-proxy jboss imageio-proxy\"",
"names=\"engine websocket-proxy jboss imageio-proxy\"",
". /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf for name in USDnames; do subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/\"USD{name}\".cer -noout -subject -nameopt compat | sed 's;subject=\\(.*\\);\\1;' )\" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass \\ <1> --subject=\"USD{subject}\" --san=DNS:\"USD{ENGINE_FQDN}\" --keep-key done",
"systemctl restart httpd systemctl restart ovirt-engine systemctl restart ovirt-websocket-proxy systemctl restart ovirt-imageio"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/upgrade_guide/remote_upgrading_from_4-2 |
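After the re-signing procedure above completes, you can confirm that a certificate now carries a SHA-256 signature; a quick check, using certificate paths that appear in the procedure:
# The signature algorithm should now report sha256WithRSAEncryption.
openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -text | grep 'Signature Algorithm'
# The re-signed CA certificate can be inspected the same way.
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -text | grep 'Signature Algorithm'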
Chapter 2. Before you begin | Chapter 2. Before you begin Initial setup Create an OpenShift instance. For more details on how to create an OpenShift instance, see OpenShift Container Platform installation overview . Version compatibility and support OpenShift Container Platform versions 3.11 and 4.7 and later support the S2I for OpenShift image. For details about the current support levels for OpenShift Container Platform, see Red Hat OpenShift Container Platform Life Cycle Policy and Red Hat OpenShift Container Platform Life Cycle Policy (non-current versions) . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_11/openjdk-prereq-s2i-openshift
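A quick way to confirm that your cluster falls inside that supported range before building with the image is to query it directly; the API URL below is illustrative.
# Log in to the cluster and print the client and server versions; the server
# version should be 4.7 or later (or 3.11) per the compatibility note above.
oc login https://api.example-cluster.example.com:6443
oc version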
Chapter 5. Using the API | Chapter 5. Using the API For more information, see the AMQ Python API reference and AMQ Python example suite . 5.1. Handling messaging events AMQ Python is an asynchronous event-driven API. To define how an application handles events, the user implements callback methods on the MessagingHandler class. These methods are then called as network activity or timers trigger new events. Example: Handling messaging events class ExampleHandler(MessagingHandler): def on_start(self, event): print("The container event loop has started") def on_sendable(self, event): print("A message can be sent") def on_message(self, event): print("A message is received") These are only a few common-case events. The full set is documented in the API reference . 5.2. Accessing event-related objects The event argument has attributes for accessing the object that the event concerns. For example, the on_connection_opened event sets the event connection attribute. In addition to the primary object for the event, all objects that form the context for the event are set as well. Attributes with no relevance to a particular event are null. Example: Accessing event-related objects event. container event. connection event. session event. sender event. receiver event. delivery event. message 5.3. Creating a container The container is the top-level API object. It is the entry point for creating connections, and it is responsible for running the main event loop. It is often constructed with a global event handler. Example: Creating a container handler = ExampleHandler() container = Container(handler) container.run() 5.4. Setting the container identity Each container instance has a unique identity called the container ID. When AMQ Python makes a connection, it sends the container ID to the remote peer. To set the container ID, assign it to the container's container_id attribute, as shown in the example. Example: Setting the container identity container = Container(handler) container.container_id = "job-processor-3" If the user does not set the ID, the library will generate a UUID when the container is constructed. | [
"class ExampleHandler(MessagingHandler): def on_start(self, event): print(\"The container event loop has started\") def on_sendable(self, event): print(\"A message can be sent\") def on_message(self, event): print(\"A message is received\")",
"event. container event. connection event. session event. sender event. receiver event. delivery event. message",
"handler = ExampleHandler() container = Container(handler) container.run()",
"container = Container(handler) container.container_id = \"job-processor-3\""
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_python_client/using_the_api |
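To experiment with the handler, container, and container-ID snippets above outside a prepared environment, the client library has to be installed first. The sketch below assumes the upstream Qpid Proton Python bindings, which the AMQ Python client is based on; the Red Hat build may instead be delivered as an RPM (for example python3-qpid-proton) from the AMQ clients repositories.
# Upstream bindings from PyPI (assumption: a pip-based install is acceptable here).
pip install python-qpid-proton
# Run a script that constructs a Container around your MessagingHandler subclass.
python3 example_handler.py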
Chapter 13. Host status in Satellite | Chapter 13. Host status in Satellite In Satellite, each host has a global status that indicates which hosts need attention. Each host also has sub-statuses that represent status of a particular feature. With any change of a sub-status, the global status is recalculated and the result is determined by statuses of all sub-statuses. 13.1. Host global status overview The global status represents the overall status of a particular host. The status can have one of three possible values: OK , Warning , or Error . You can find global status on the Hosts Overview page. The status displays a small icon to host name and has a color that corresponds with the status. Hovering over the icon renders a tooltip with sub-status information to quickly find out more details. To view the global status for a host, in the Satellite web UI, navigate to Hosts > All Hosts . OK No errors were reported by any sub-status. This status is highlighted with the color green. Warning While no error was detected, some sub-status raised a warning. For example, there are no configuration management reports for the host even though the host is configured to send reports. It is a good practice to investigate any warnings to ensure that your deployment remains healthy. This status is highlighted with the color yellow. Error Some sub-status reports a failure. For example, a run contains some failed resources. This status is highlighted with the color red. Search syntax If you want to search for hosts according to their status, use the syntax for searching in Satellite that is outlined in the Searching and Bookmarking in Administering Red Hat Satellite , and then build your searches out using the following status-related examples: To search for hosts that have an OK status: To search for all hosts that deserve attention: 13.2. Host sub-status overview A sub-status monitors only a part of a host's capabilities. To view the sub-statuses of a host, in the Satellite web UI, navigate to Hosts > All Hosts and click on the host whose full status you want to inspect. You can view the global host status to the name of the host and the host sub-statuses on the Host status card. Each sub-status has its own set of possible values that are mapped to the three global status values. Below are listed sub-statuses that Satellite contains. Configuration This sub-status is only relevant if Satellite uses a configuration management system like Ansible, Puppet, or Salt. Possible values: Label Global host status Alerts disabled OK Active OK Pending OK No changes OK No reports OK / Warning Out of sync Warning Error Error Additional information about the values of this sub-status: Active : During the last configuration, some resources were applied. Pending : During the last configuration, some resources would be applied but your configuration management integration was configured to run in noop mode. No changes : During the last configuration, nothing changed. No reports : This can be both a Warning or OK status. When there are no reports but the host uses an associated Capsule for configuration management or the always_show_configuration_status setting is set to true , it maps to Warning . Otherwise it maps to OK . Error : This indicates an error during configuration. For example, a configuration run failed to install a package. Out of sync : A configuration report was not received within the expected interval, based on the outofsync_interval setting. 
Reports are identified by an origin and can have different intervals based upon it. Build This sub-status is only relevant for hosts provisioned from Satellite or hosts registered through global registration. Possible values: Label Global host status Number value Installed OK 0 Pending installation OK 1 Token expired Error 2 Installation error Error 3 Compliance Indicates if the host is compliant with OpenSCAP policies. Possible values: Label Global host status Number value Compliant OK 0 Inconclusive Warning 1 At least one incompliant Error 2 OVAL scan Indicates if there are any vulnerabilities found on the host Possible values: Label Global host status Number value No vulnerabilities found OK 0 Vulnerabilities found Warning 1 Vulnerabilities with available patch found Error 2 Execution Status of the last completed remote execution job. Possible values: Label Global host status Number value Last execution succeeded / No execution finished yet OK 0 Last execution failed Error 1 Unknown execution status OK 2 or 3 Last execution cancelled OK 4 Inventory Indicates if the host is synchronized to Red Hat Hybrid Cloud Console. Satellite Server performs the synchronization itself but only uploads basic information to Red Hat Hybrid Cloud Console. Possible values: Label Global host status Number value Host was not uploaded to your RH cloud inventory Warning 0 Successfully uploaded to your RH cloud inventory OK 1 Insights Indicates if the host is synchronized to Red Hat Hybrid Cloud Console. This synchronization is performed by the host. The host uploads more information than the Satellite Server. Possible values: Label Global host status Number value Reporting OK 0 Not reporting Error 1 Errata Indicates if Errata is available on the host. Possible values: Label Global host status Number value Up to date OK 0 Unknown Warning 1 Needed errata Error 2 Needed security errata Error 3 Subscription Indicates if the host has a valid RHEL subscription. Possible values: Label Global host status Number value Fully entitled OK 0 Partially entitled Warning 1 Unentitled Error 2 Unknown Warning 3 Unsubscribed hypervisor Warning 4 SCA enabled OK 5 Service level Indicates if a subscription matching your specified Service level syspurpose value can be attached. Possible values: Label Global host status Number value Unknown OK 0 Mismatched Warning 1 Matched OK 2 Not specified OK 3 Role Indicates if a subscription matching your specified Role syspurpose value can be attached. Possible values: Label Global host status Number value Unknown OK 0 Mismatched Warning 1 Matched OK 2 Not specified OK 3 Usage Indicates if a subscription matching your specified Usage syspurpose value can be attached. Possible values: Label Global host status Number value Unknown OK 0 Mismatched Warning 1 Matched OK 2 Not specified OK 3 Addons Indicates if a subscription matching your specified Addons syspurpose value can be attached. Possible values: Label Global host status Number value Unknown OK 0 Mismatched Warning 1 Matched OK 2 Not specified OK 3 System purpose Indicates if a subscription matching your specified syspurpose values can be attached. Possible values: Label Global host status Number value Unknown OK 0 Mismatched Warning 1 Matched OK 2 Not specified OK 3 RHEL Lifecycle Indicates the current state of the Red Hat Enterprise Linux operating system installed on the host. 
Possible values: Label Global host status Number value Unknown OK 0 Full support OK 1 Maintenance support OK 2 Approaching end of maintenance support Warning 3 Extended support OK 4 Approaching end of support Warning 5 Support ended Error 6 Traces Indicates if the host needs a reboot or a process restart. Possible values: Label Global host status Number value Unknown Warning -1 Up to date OK 0 Required process restart Error 1 Required reboot Error 2 Search syntax If you want to search for hosts according to their sub-status, use the syntax for searching in Satellite that is outlined in the Searching and Bookmarking chapter of the Administering Satellite guide, and then build your searches out using the following status-related examples: You can search for hosts' configuration sub-statuses based on their last reported state. For example, to find hosts that have at least one pending resource: To find hosts that restarted some service during the last run: To find hosts that have an interesting last run that might indicate something has happened: | [
"global_status = ok",
"global_status = error or global_status = warning",
"status.pending > 0",
"status.restarted > 0",
"status.interesting = true"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/host_status_managing-hosts |
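The same search strings work outside the web UI; a sketch using the hammer CLI, assuming it is configured against your Satellite Server (the queries below reuse examples from this chapter):
# Hosts whose global status needs attention.
hammer host list --search 'global_status = error or global_status = warning'
# Hosts reporting at least one pending configuration resource.
hammer host list --search 'status.pending > 0'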
Chapter 8. Installing a cluster on Azure into an existing VNet | Chapter 8. Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.13, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 8.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.13, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 8.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. 
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 8.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 8.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 8.2. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin 8.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 8.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. 
The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. 
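When you install into an existing VNet, the main additions to make at this step are the fields under platform.azure that point the installation program at your existing network resources. The following fragment is only an illustrative sketch that reuses the placeholder resource names from the earlier Azure CLI example; a complete sample file appears later in this chapter, and each parameter is described in the "Additional Azure configuration parameters" table.
platform:
  azure:
    region: centralus
    networkResourceGroupName: example-vnet-rg
    virtualNetwork: example-vnet
    controlPlaneSubnet: example-control-plane-subnet
    computeSubnet: example-compute-subnet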
You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.4. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.5. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. 
Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.6. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. 
While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. 
alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 8.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 8.7. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. 
Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . 
platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. 
This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.8. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. 
When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.6.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 8.1. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 8.6.4. 
Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 8.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Famil StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 8.6.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 1 10 13 19 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. 
Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 If you use an existing VNet, specify the name of the resource group that contains it. 16 If you use an existing VNet, specify its name. 17 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 18 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.6.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 8.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.11. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/installing-azure-vnet |
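The Important note above about expired Ignition certificates says that you must manually approve the pending node-bootstrapper certificate signing requests, but it does not show the commands. A minimal sketch with oc follows; <csr_name> is a placeholder for a name reported as Pending by the first command, and newly issued kubelet serving CSRs may appear after the first approvals and need the same treatment:
oc get csr
oc adm certificate approve <csr_name>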
Chapter 12. ImageDigestMirrorSet [config.openshift.io/v1] | Chapter 12. ImageDigestMirrorSet [config.openshift.io/v1] Description ImageDigestMirrorSet holds cluster-wide information about how to handle registry mirror rules on using digest pull specification. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status contains the observed state of the resource. 12.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description imageDigestMirrors array imageDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using tag specification, users should configure a list of mirrors using "ImageTagMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagedigestmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a specific order of mirrors, should configure them into one list of mirrors using the expected order. imageDigestMirrors[] object ImageDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. 12.1.2. 
.spec.imageDigestMirrors Description imageDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using tag specification, users should configure a list of mirrors using "ImageTagMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagedigestmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a specific order of mirrors, should configure them into one list of mirrors using the expected order. Type array 12.1.3. .spec.imageDigestMirrors[] Description ImageDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description mirrorSourcePolicy string mirrorSourcePolicy defines the fallback policy if fails to pull image from the mirrors. If unset, the image will continue to be pulled from the the repository in the pull spec. sourcePolicy is valid configuration only when one or more mirrors are in the mirror list. mirrors array (string) mirrors is zero or more locations that may also contain the same images. No mirror will be configured if not specified. Images can be pulled from these mirrors only if they are referenced by their digests. The mirrored location is obtained by replacing the part of the input reference that matches source by the mirrors entry, e.g. for registry.redhat.io/product/repo reference, a (source, mirror) pair *.redhat.io, mirror.local/redhat causes a mirror.local/redhat/product/repo repository to be used. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. If no mirror is specified or all image pulls from the mirror list fail, the image will continue to be pulled from the repository in the pull spec unless explicitly prohibited by "mirrorSourcePolicy" Other cluster configuration, including (but not limited to) other imageDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. "mirrors" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] 
host[:port]/namespace[/namespace...]/repo for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table source string source matches the repository that users refer to, e.g. in image pull specifications. Setting source to a registry hostname e.g. docker.io. quay.io, or registry.redhat.io, will match the image pull specification of corressponding registry. "source" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo [*.]host for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table 12.1.4. .status Description status contains the observed state of the resource. Type object 12.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagedigestmirrorsets DELETE : delete collection of ImageDigestMirrorSet GET : list objects of kind ImageDigestMirrorSet POST : create an ImageDigestMirrorSet /apis/config.openshift.io/v1/imagedigestmirrorsets/{name} DELETE : delete an ImageDigestMirrorSet GET : read the specified ImageDigestMirrorSet PATCH : partially update the specified ImageDigestMirrorSet PUT : replace the specified ImageDigestMirrorSet /apis/config.openshift.io/v1/imagedigestmirrorsets/{name}/status GET : read status of the specified ImageDigestMirrorSet PATCH : partially update status of the specified ImageDigestMirrorSet PUT : replace status of the specified ImageDigestMirrorSet 12.2.1. /apis/config.openshift.io/v1/imagedigestmirrorsets HTTP method DELETE Description delete collection of ImageDigestMirrorSet Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageDigestMirrorSet Table 12.2. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSetList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageDigestMirrorSet Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.5. 
HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 202 - Accepted ImageDigestMirrorSet schema 401 - Unauthorized Empty 12.2.2. /apis/config.openshift.io/v1/imagedigestmirrorsets/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the ImageDigestMirrorSet HTTP method DELETE Description delete an ImageDigestMirrorSet Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageDigestMirrorSet Table 12.9. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageDigestMirrorSet Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageDigestMirrorSet Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 401 - Unauthorized Empty 12.2.3. /apis/config.openshift.io/v1/imagedigestmirrorsets/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the ImageDigestMirrorSet HTTP method GET Description read status of the specified ImageDigestMirrorSet Table 12.16. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageDigestMirrorSet Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageDigestMirrorSet Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/imagedigestmirrorset-config-openshift-io-v1 |
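The reference above lists the fields of ImageDigestMirrorSet but includes no sample manifest. The following minimal sketch maps one source repository to one mirror using only the source and mirrors fields described under .spec.imageDigestMirrors; the registry names are placeholders, not recommendations:
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: example-digest-mirrors
spec:
  imageDigestMirrors:
  - source: registry.example.com/team/app
    mirrors:
    - mirror.internal.example.com/team/app
EOF
With this object in place, digest-based pulls of images under registry.example.com/team/app are attempted against the mirror first and fall back to the source repository unless mirrorSourcePolicy forbids it, as described above.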
Installing and using Red Hat build of OpenJDK 11 for Windows | Installing and using Red Hat build of OpenJDK 11 for Windows Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_for_windows/index |
Preface | Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 7.2 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. Capabilities and limits of Red Hat Enterprise Linux 7 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/pref-release_notes-preface |
Chapter 8. Updating a cluster using the web console | Chapter 8. Updating a cluster using the web console You can update, or upgrade, an OpenShift Container Platform cluster by using the web console. The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions. Note Use the web console or oc adm upgrade channel <channel> to change the update channel. You can follow the steps in Updating a cluster using the CLI to complete the update after you change to a 4.10 channel. 8.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a previous state . Support for RHEL7 workers is removed in OpenShift Container Platform 4.10. You must replace RHEL7 workers with RHEL8 or RHCOS workers before upgrading to OpenShift Container Platform 4.10. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See Updating installed Operators for more information. Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. To accommodate the time it takes to update, you are able to do a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. Additional resources Support policy for unmanaged Operators 8.2. Performing a canary rollout update In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster. These use cases include, but are not limited to: You have mission-critical applications that you do not want unavailable during the update.
You can slowly test the applications on your nodes in small batches after the update. You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows. The rolling update process is not a typical update workflow. With larger clusters, it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider whether your organization wants to use a rolling update and carefully plan the implementation of the process before you start. The rolling update process described in this topic involves: Creating one or more custom machine config pools (MCPs). Labeling each node that you do not want to update immediately to move those nodes to the custom MCPs. Pausing those custom MCPs, which prevents updates to those nodes. Performing the cluster update. Unpausing one custom MCP, which triggers the update on those nodes. Testing the applications on those nodes to make sure the applications work as expected on those newly-updated nodes. Optionally removing the custom labels from the remaining nodes in small batches and testing the applications on those nodes. Note Pausing an MCP prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically-rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires and the MCO attempts to automatically renew the certificate, the new certificate is created but not applied across the nodes in the respective machine config pool. This causes failure in multiple oc commands, including but not limited to oc debug , oc logs , oc exec , and oc attach . Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. If you want to use the canary rollout update process, see Performing a canary rollout update . 8.3. Pausing a MachineHealthCheck resource by using the web console During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Compute MachineHealthChecks . To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to each MachineHealthCheck resource. For example, to add the annotation to the machine-api-termination-handler resource, complete the following steps: Click the Options menu to the machine-api-termination-handler and click Edit annotations . In the Edit annotations dialog, click Add more . In the Key and Value fields, add cluster.x-k8s.io/paused and "" values, respectively, and click Save . 8.4. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. 
However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your upgrade fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not rollback automatically. Additional resources For information on which machine configuration changes require a reboot, see the note in Understanding the Machine Config Operator . 8.5. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with admin privileges. Pause all MachineHealthCheck resources. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.10 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the next minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the next minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes.
Note If you are upgrading your cluster to the next minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are upgraded before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the next minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. 8.6. Changing the update server by using the web console Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. Procedure Navigate to Administration Cluster Settings , click version . Click the YAML tab and then edit the upstream parameter value: Example output ... spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1 ... 1 The <update-server-url> variable specifies the URL for the update server. The default upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Click Save . Additional resources Understanding update channels and releases | [
"spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/updating-cluster-within-minor |
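Section 8.2 above describes pausing custom machine config pools during a canary rollout, and section 8.3 shows how to pause MachineHealthCheck resources through the web console. A rough CLI equivalent is sketched below; it is not part of the original procedure, the pool name is a placeholder, the health check name comes from the example in section 8.3, and the namespace is assumed to be the usual openshift-machine-api:
# pause a custom machine config pool before the update, and resume it when you are ready to roll the canary nodes
oc patch mcp/workerpool-canary --type merge --patch '{"spec":{"paused":true}}'
oc patch mcp/workerpool-canary --type merge --patch '{"spec":{"paused":false}}'
# pause and later unpause a machine health check by toggling the documented annotation
oc -n openshift-machine-api annotate machinehealthcheck machine-api-termination-handler cluster.x-k8s.io/paused=""
oc -n openshift-machine-api annotate machinehealthcheck machine-api-termination-handler cluster.x-k8s.io/paused-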
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices External mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_z/preface-ibm-z |
1.6. Pre-installation Script | 1.6. Pre-installation Script You can add commands to run on the system immediately after the ks.cfg has been parsed. This section must be at the end of the kickstart file (after the commands) and must start with the %pre command. You can access the network in the %pre section; however, name service has not been configured at this point, so only IP addresses work. Note Note that the pre-install script is not run in the change root environment. --interpreter /usr/bin/python Allows you to specify a different scripting language, such as Python. Replace /usr/bin/python with the scripting language of your choice. 1.6.1. Example Here is an example %pre section: This script determines the number of hard drives in the system and writes a text file with a different partitioning scheme depending on whether it has one or two drives. Instead of having a set of partitioning commands in the kickstart file, include the line: The partitioning commands selected in the script are used. Note The pre-installation script section of kickstart cannot manage multiple install trees or source media. This information must be included for each created ks.cfg file, as the pre-installation script occurs during the second stage of the installation process. | [
"%pre #!/bin/sh hds=\"\" mymedia=\"\" for file in /proc/ide/h* do mymedia=`cat USDfile/media` if [ USDmymedia == \"disk\" ] ; then hds=\"USDhds `basename USDfile`\" fi done set USDhds numhd=`echo USD#` drive1=`echo USDhds | cut -d' ' -f1` drive2=`echo USDhds | cut -d' ' -f2` #Write out partition scheme based on whether there are 1 or 2 hard drives if [ USDnumhd == \"2\" ] ; then #2 drives echo \"#partitioning scheme generated in %pre for 2 drives\" > /tmp/part-include echo \"clearpart --all\" >> /tmp/part-include echo \"part /boot --fstype ext3 --size 75 --ondisk hda\" >> /tmp/part-include echo \"part / --fstype ext3 --size 1 --grow --ondisk hda\" >> /tmp/part-include echo \"part swap --recommended --ondisk USDdrive1\" >> /tmp/part-include echo \"part /home --fstype ext3 --size 1 --grow --ondisk hdb\" >> /tmp/part-include else #1 drive echo \"#partitioning scheme generated in %pre for 1 drive\" > /tmp/part-include echo \"clearpart --all\" >> /tmp/part-include echo \"part /boot --fstype ext3 --size 75\" >> /tmp/part-includ echo \"part swap --recommended\" >> /tmp/part-include echo \"part / --fstype ext3 --size 2048\" >> /tmp/part-include echo \"part /home --fstype ext3 --size 2048 --grow\" >> /tmp/part-include fi",
"%include /tmp/part-include"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kickstart_installations-pre_installation_script |
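The --interpreter option documented above is not used in the shell example. A hedged sketch of the same disk-counting logic in Python follows; it assumes the installation image provides /usr/bin/python, and like the shell version it writes a file that is pulled in later with %include /tmp/part-include:
%pre --interpreter /usr/bin/python
# count the IDE disks and write a partitioning include file, mirroring the shell example
import glob
disks = [d for d in glob.glob('/proc/ide/hd*')
         if open(d + '/media').read().strip() == 'disk']
out = open('/tmp/part-include', 'w')
out.write('#partitioning scheme generated in %%pre for %d drive(s)\n' % len(disks))
out.write('clearpart --all\n')
if len(disks) >= 2:
    out.write('part /boot --fstype ext3 --size 75 --ondisk hda\n')
    out.write('part / --fstype ext3 --size 1 --grow --ondisk hda\n')
    out.write('part swap --recommended --ondisk hda\n')
    out.write('part /home --fstype ext3 --size 1 --grow --ondisk hdb\n')
else:
    out.write('part /boot --fstype ext3 --size 75\n')
    out.write('part swap --recommended\n')
    out.write('part / --fstype ext3 --size 2048\n')
    out.write('part /home --fstype ext3 --size 2048 --grow\n')
out.close()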
Appendix C. Understanding the node_replace_inventory.yml file | Appendix C. Understanding the node_replace_inventory.yml file The node_replace_inventory.yml file is an example Ansible inventory file that you can use to prepare a replacement host for your Red Hat Hyperconverged Infrastructure for Virtualization cluster. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/node_replace_inventory.yml on any hyperconverged host. C.1. Configuration parameters for node replacement hosts (required) Defines one active host in the cluster using the back-end FQDN. gluster_maintenance_old_node (required) Defines the backend FQDN of the node being replaced. gluster_maintenance_new_node (required) Defines the backend FQDN of the replacement node. gluster_maintenance_cluster_node (required) An active node in the cluster. Cannot be the same as gluster_maintenance_cluster_node_2 . gluster_maintenance_cluster_node_2 (required) An active node in the cluster. Cannot be the same as gluster_maintenance_cluster_node . C.2. Example node_replace_inventory.yml | [
"cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: [common host configuration]",
"cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_old_node: host1-backend-fqdn.example.com",
"cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_new_node: new-host-backend-fqdn.example.com",
"cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_cluster_node: host2-backend-fqdn.example.com",
"cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_cluster_node_2: host3-backend-fqdn.example.com",
"cluster_node: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_old_node: host1-backend-fqdn.example.com gluster_maintenance_new_node: new-host-backend-fqdn.example.com gluster_maintenance_cluster_node: host2-backend-fqdn.example.com gluster_maintenance_cluster_node_2: host3-backend-fqdn.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/understanding-the-node_replace_inventory-yml-file |
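The appendix above explains the inventory file itself but not how it is passed to Ansible. It is typically supplied with -i to the node replacement playbook shipped in the same hc-ansible-deployment directory; the playbook file name below is a placeholder, so check the playbooks present on your hyperconverged host before running anything:
cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
ansible-playbook -i node_replace_inventory.yml <node_replacement_playbook>.yml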
Chapter 29. Improving network latency using TCP_NODELAY | Chapter 29. Improving network latency using TCP_NODELAY By default, TCP uses Nagle's algorithm to collect small outgoing packets to send all at once. This can cause higher rates of latency. Prerequisites You have administrator privileges. 29.1. The effects of using TCP_NODELAY Applications that require low latency on every packet sent must be run on sockets with the TCP_NODELAY option enabled. This sends buffer writes to the kernel as soon as an event occurs. Note For TCP_NODELAY to be effective, applications must avoid doing small, logically related buffer writes. Otherwise, these small writes cause TCP to send these multiple buffers as individual packets, resulting in poor overall performance. If applications have several buffers that are logically related and must be sent as one packet, apply one of the following workarounds to avoid poor performance: Build a contiguous packet in memory and then send the logical packet to TCP on a socket configured with TCP_NODELAY . Create an I/O vector and pass it to the kernel using the writev command on a socket configured with TCP_NODELAY . Use the TCP_CORK option. TCP_CORK tells TCP to wait for the application to remove the cork before sending any packets. This command causes the buffers it receives to be appended to the existing buffers. This allows applications to build a packet in kernel space, which can be required when using different libraries that provide abstractions for layers. When a logical packet has been built in the kernel by the various components in the application, the socket should be uncorked, allowing TCP to send the accumulated logical packet immediately. 29.2. Enabling TCP_NODELAY The TCP_NODELAY option sends buffer writes to the kernel when events occur, with no delays. Enable TCP_NODELAY using the setsockopt() function. Procedure Add the following lines to the TCP application's .c file. Save the file and exit the editor. Apply one of the following workarounds to prevent poor performance. Build a contiguous packet in memory and then send the logical packet to TCP on a socket configured with TCP_NODELAY . Create an I/O vector and pass it to the kernel using writev on a socket configured with TCP_NODELAY . 29.3. Enabling TCP_CORK The TCP_CORK option prevents TCP from sending any packets until the socket is "uncorked". Procedure Add the following lines to the TCP application's .c file. Save the file and exit the editor. After the logical packet has been built in the kernel by the various components in the application, disable TCP_CORK . TCP sends the accumulated logical packet immediately, without waiting for any further packets from the application. 29.4. Additional resources tcp(7) , setsockopt(3p) , and setsockopt(2) man pages on your system | [
"int one = 1; setsockopt(descriptor, SOL_TCP, TCP_NODELAY, &one, sizeof(one));",
"int one = 1; setsockopt(descriptor, SOL_TCP, TCP_CORK, &one, sizeof(one));",
"int zero = 0; setsockopt(descriptor, SOL_TCP, TCP_CORK, &zero, sizeof(zero));"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_improving-network-latency-using-tcp_nodelay_optimizing-rhel9-for-real-time-for-low-latency-operation |
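Section 29.1 recommends building an I/O vector and handing it to the kernel with writev() on a TCP_NODELAY socket, but no example is shown. A minimal sketch in C follows, assuming descriptor is an already connected TCP socket and with error handling omitted for brevity:
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Send a header and a payload as one logical packet with no Nagle delay. */
static ssize_t send_low_latency(int descriptor, const void *hdr, size_t hdr_len,
                                const void *body, size_t body_len)
{
    int one = 1;
    struct iovec iov[2];

    /* Disable Nagle's algorithm for this socket. */
    setsockopt(descriptor, SOL_TCP, TCP_NODELAY, &one, sizeof(one));

    iov[0].iov_base = (void *) hdr;
    iov[0].iov_len  = hdr_len;
    iov[1].iov_base = (void *) body;
    iov[1].iov_len  = body_len;

    /* One writev() call hands both buffers to TCP as a single write. */
    return writev(descriptor, iov, 2);
}
Because both buffers reach TCP in a single system call, they are not sent as two small packets, which is the pattern the chapter warns against.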
8.181. polkit | 8.181. polkit 8.181.1. RHBA-2014:1533 - polkit bug fix and enhancement update The updated polkit packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. PolicyKit is a toolkit for defining and handling authorizations. Bug Fixes BZ# 628862 Previously, running the pkaction command with invalid arguments opened the corresponding manual page instead of generating a warning, or giving any other indication of erroneous behavior. With this update, the user is informed by an error message. BZ# 864613 Prior to this update, in PolicyKit local authority, the order of processing configuration files within a directory depended only on file system specifics. The ordering has been made consistent to avoid surprising changes in behavior but remains unspecified and may change in future updates of Red Hat Enterprise Linux; use the documented ordering of directory names if your configuration relies on ordering of the .pkla configuration files. BZ# 1132830 Prior to this update, if a process subject to an authorization query became a zombie before completing the authorization, the polkitd daemon could terminate unexpectedly. Handling of zombie processes has been improved to fix this crash. In addition, this update adds the following Enhancements BZ# 927406 With this update, all polkit binary files have been compiled with the RELRO option, and where applicable, with the PIE option, to increase resilience against various attacks. BZ# 812684 With this update, more flexibility in polkit rules is allowed. In addition to the existing "unix-user:" and "unix-group:" identity specifications, a new specification "default" can be used to specify authorization result for users that do not match either of the "unix-user:" or "unix-group:" specifications. Users of polkit are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/polkit |
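The "default" identity described in BZ#812684 is used inside a .pkla file, typically under /etc/polkit-1/localauthority/50-local.d/. A hedged sketch follows; the action ID and result values are illustrative placeholders, not a recommended policy:
[Fallback for example action]
Identity=default
Action=org.example.manage-widgets
ResultAny=no
ResultInactive=no
ResultActive=auth_admin
Entries matching unix-user: or unix-group: identities still take precedence; the default entry only applies to users not matched by those more specific rules.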
Chapter 10. Federal Standards and Regulations | Chapter 10. Federal Standards and Regulations 10.1. Introduction In order to maintain security levels, it is possible for your organization to make efforts to comply with federal and industry security specifications, standards and regulations. This chapter describes some of these standards and regulations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-security_guide-federal_standards_and_regulations |
Chapter 8. Working with clusters | Chapter 8. Working with clusters 8.1. Viewing system event information in an OpenShift Container Platform cluster Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster. 8.1.1. Understanding events Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal To view events in your project from the OpenShift Container Platform console. Launch the OpenShift Container Platform console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of OpenShift Container Platform. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. 
NilShaper Undefined shaper. NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Network events (openshift-sdn) Name Description Starting Starting OpenShift SDN. NetworkFailed The pod's network interface has been lost and the pod will be stopped. Table 8.12. Network events (kube-proxy) Name Description NeedPods The service-port <serviceName>:<port> needs pods. Table 8.13. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. 
ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.14. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.15. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.16. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.17. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.18. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your OpenShift Container Platform nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. 
It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on their selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running the tool as a job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites The OpenShift Cluster Capacity Tool is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --podspec /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install OpenShift Cluster Capacity Tool .
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Create a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The config map object named cluster-capacity-configmap mounts the input pod spec file pod.yaml into a volume test-volume at the path /test-pod . Create the job by using the following example of a job specification file: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2).
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. 
For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. 
When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. In that case, no storage limit set on images prevents such an image from being uploaded. The issue is being addressed. 8.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. ImageStream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in the imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage , and a DockerImage . Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of the number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal to the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal to the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object.
2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.3.2. Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: USD oc delete limits <limit_name> 8.4. 
Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.4.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Container Platform scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.4.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Container Platform are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. 
If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. Additional resources Understanding compute resources and containers 8.4.2. Understanding OpenJDK settings for OpenShift Container Platform The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. 
This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. The OpenShift Container Platform Jenkins maven slave image uses the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available in Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and at OpenJDK and Containers . 8.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , MAVEN_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. To ensure that these options are used by default for all JVM workloads run in the slave image, the OpenShift Container Platform Jenkins maven slave image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value.
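The Downward API exposes these values in bytes by default. If you prefer larger units, the Kubernetes resourceFieldRef stanza also accepts an optional divisor field. The following fragment is a sketch, not a required part of this procedure; the environment variable name is illustrative, and it assumes the same test container as above:
env:
- name: MEMORY_LIMIT_MIB  # illustrative name; exposes the memory limit in mebibytes
  valueFrom:
    resourceFieldRef:
      containerName: test
      resource: limits.memory
      divisor: 1Mi
With divisor: 1Mi , the 512Mi limit from the example above is reported as 512 rather than 536870912.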
Create the pod by running the following command: USD oc create -f <file-name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container from the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 8.4.4. Understanding OOM kill policy OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process that exited with code 137 indicates that it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: # oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 code indicates the container process exited with code 137, indicating it received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.4.5. Understanding pod eviction OpenShift Container Platform may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy .
However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 8.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes. You can configure cluster-level overcommit using the ClusterResourceOverride Operator to override the ratio between requests and limits set on developer containers. In conjunction with node overcommit and project memory and CPU limits and defaults , you can adjust the resource limit and request to achieve the desired level of overcommit. Note In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node . 8.5.1. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted. 8.5.2. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. 
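To make the override concrete, here is an illustration only; the values are hypothetical, and the 50 percent figure corresponds to the memoryRequestToLimitPercent setting in the CR shown next. A container submitted to an enabled namespace with only a 2Gi memory limit is admitted with a 1Gi memory request, which matches the 200% overcommit level described above:
# Container resources as submitted in the pod spec (illustrative)
resources:
  limits:
    memory: 2Gi
# Container resources after admission with memoryRequestToLimitPercent: 50
resources:
  requests:
    memory: 1Gi
  limits:
    memory: 2Gi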
You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. 8.5.2.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . 
On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. 
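For reference, a minimal LimitRange that supplies default container limits, so that the override has values to act on, might look like the following sketch. The name and values are illustrative; see "Restrict resource consumption with limit ranges" earlier in this chapter for the full syntax:
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "default-container-limits"
spec:
  limits:
  - type: "Container"
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"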
Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "4.11" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. 
Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 8.5.3. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 8.5.3.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 8.5.3.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. 
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled so that it cannot use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 8.5.3.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 8.5.3.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 8.19. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 8.5.3.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to prevent pods in lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class.
This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 8.5.3.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 8.5.3.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide the best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 8.5.3.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 8.5.3.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 8.5.3.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 8.5.4. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 8.5.4.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Edit the namespace object to add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 8.5.5. Additional resources Setting deployment resources . Allocating resources for nodes . 8.6. Enabling OpenShift Container Platform features using FeatureGates As an administrator, you can use feature gates to enable features that are not part of the default set of features. 8.6.1.
Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these tech preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: Microsoft Azure File CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage. CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for: Amazon Web Services (AWS) Elastic Block Storage (EBS) Google Compute Engine Persistent Disk Azure File VMware vSphere Cluster Cloud Controller Manager Operator. Enables the Cluster Cloud Controller Manager Operator rather than the in-tree cloud controller. Available as a Technology Preview for: Alibaba Cloud Amazon Web Services (AWS) Google Cloud Platform (GCP) IBM Cloud Microsoft Azure Red Hat OpenStack Platform (RHOSP) VMware vSphere Shared resource CSI driver CSI volume support for the OpenShift Container Platform build system Swap memory on nodes Cluster API. Enables the integrated upstream Cluster API in OpenShift Container Platform with the ClusterAPIEnabled feature gate. Available as a Technology Preview for: Amazon Web Services (AWS) Google Cloud Platform (GCP) Managing alerting rules for core platform monitoring Additional resources For more information about the features activated by the TechPreviewNoUpgrade feature gate, see the following topics: CSI automatic migration Cluster Cloud Controller Manager Operator Source-to-image (S2I) build volumes and Docker build volumes Swap memory on nodes Managing alerting rules for core platform monitoring 8.6.2. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 
2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.6.3. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator need change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. 
The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. Pods are then evicted from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust how often the Kubelet reports status and how long the Kubernetes Controller Manager waits for those status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles are sets of parameters that are pre-defined with carefully tuned values to control the reaction of the cluster to increased latency, so there is no need to find the best values experimentally. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 8.7.1. Understanding worker latency profiles Worker latency profiles are carefully tuned groupings of parameter values. The four parameters which implement these values are node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters can use values which allow you to control the reaction of the cluster to latency issues without needing to determine the best values using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes.
The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. While the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the statuses of the Kubelet every 5 seconds. The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from the Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod on that node has a toleration for the NoExecute taint, the pod is evicted according to its tolerationSeconds value. If the pod has no such toleration, it will be evicted in 300 seconds ( default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ).
Profile | Component | Parameter | Value
Default | kubelet | node-status-update-frequency | 10s
Default | Kubernetes Controller Manager | node-monitor-grace-period | 40s
Default | Kubernetes API Server Operator | default-not-ready-toleration-seconds | 300s
Default | Kubernetes API Server Operator | default-unreachable-toleration-seconds | 300s
Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts.
Profile | Component | Parameter | Value
MediumUpdateAverageReaction | kubelet | node-status-update-frequency | 20s
MediumUpdateAverageReaction | Kubernetes Controller Manager | node-monitor-grace-period | 2m
MediumUpdateAverageReaction | Kubernetes API Server Operator | default-not-ready-toleration-seconds | 60s
MediumUpdateAverageReaction | Kubernetes API Server Operator | default-unreachable-toleration-seconds | 60s
Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts.
Profile | Component | Parameter | Value
LowUpdateSlowReaction | kubelet | node-status-update-frequency | 1m
LowUpdateSlowReaction | Kubernetes Controller Manager | node-monitor-grace-period | 5m
LowUpdateSlowReaction | Kubernetes API Server Operator | default-not-ready-toleration-seconds | 60s
LowUpdateSlowReaction | Kubernetes API Server Operator | default-unreachable-toleration-seconds | 60s
8.7.2.
Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. 
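Optionally, you can also confirm the kubelet side of the change. The following is a minimal sketch, assuming the kubelet configuration file is at /etc/kubernetes/kubelet.conf (the same file used for the feature gate verification earlier) and that the rendered field is named nodeStatusUpdateFrequency ; substitute a real node name for <node_name> :
$ oc debug node/<node_name> -- chroot /host grep -i nodestatusupdatefrequency /etc/kubernetes/kubelet.conf
For the MediumUpdateAverageReaction profile you would expect a value of 20s. If the field is absent or named differently in your release, inspect the full file instead.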
To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. | [
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi",
"oc create -f <file-name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.11\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/nodes/working-with-clusters |
Planning your deployment | Planning your deployment Red Hat OpenShift Data Foundation 4.18 Important considerations when deploying Red Hat OpenShift Data Foundation 4.18 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for a workload only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported; it is recommended to use a RADOS Block Device (RBD) volume instead. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services.
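As an illustration of how these services are consumed through storage classes, the following is a minimal persistent volume claim that requests block storage. The storage class name ocs-storagecluster-ceph-rbd is the typical default for an internal-mode deployment, and the claim name and size are placeholders; verify the names available on your cluster with oc get storageclass .
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                 # example name only
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # example size only
  storageClassName: ocs-storagecluster-ceph-rbd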
Chapter 2. Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally on, Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see Installing on IBM Power . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. Note OpenShift Data Foundation's default configuration for MCG is optimized for low resource consumption and not performance. If you plan to use MCG often, see information about increasing resource limits in the knowledgebase article Performance tuning guide for Multicloud Object Gateway . 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you to select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or to make available the services from a cluster running outside of OpenShift Container Platform (External approach).
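Whichever approach you choose, you can see the operators described above and the storage cluster they manage on an existing deployment. The following is a minimal sketch that assumes the default openshift-storage namespace:
$ oc get csv -n openshift-storage
$ oc get storagecluster -n openshift-storage
The ClusterServiceVersion (CSV) list shows the installed OpenShift Data Foundation operators, and the StorageCluster resource reflects the deployment approach and its current phase.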
2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator-based deployment and management. You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manage these applications. A simple deployment is best for situations where: Storage requirements are not clear. Red Hat OpenShift Data Foundation services run co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. For Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when: Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, in cloud or virtualized environments, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, Site Reliability Engineering (SRE), storage, and so on, needs to manage the external cluster providing storage services. Possibly a pre-existing one. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra .
For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes. Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE ROSA with hosted control planes (HCP) Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. Chapter 4. External storage services Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. An external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters. Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. This standard is mandated by law for US government agencies and contractors and is also referenced in other international and industry-specific standards. Red Hat OpenShift Data Foundation now uses the FIPS validated cryptographic modules. Red Hat Enterprise Linux CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable the FIPS mode on the OpenShift Container Platform, before you install OpenShift Data Foundation. OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation.
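Before installing OpenShift Data Foundation on such a cluster, you can quickly confirm that the nodes are actually running in FIPS mode. The following is a minimal check, assuming you can run debug pods on the nodes; substitute a real node name for <node_name> :
$ oc debug node/<node_name> -- chroot /host cat /proc/sys/crypto/fips_enabled
A value of 1 indicates that FIPS mode is enabled on that node; 0 indicates that it is not.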
5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat OpenShift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in physical media escaping your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you cannot migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The MultiCloud Object Gateway supports encryption by default. See the deployment guides for more information.
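If you are unsure whether an existing cluster was deployed with cluster-wide encryption, you can inspect the StorageCluster resource. The sketch below assumes the default resource name ocs-storagecluster and the openshift-storage namespace; the exact fields under spec.encryption vary between releases, so treat the output as indicative rather than authoritative:
$ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.encryption}'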
OpenShift Data Foundation supports cluster-wide encryption with and without a Key Management System (KMS). Cluster-wide encryption with KMS is supported using the following service providers: HashiCorp Vault Thales CipherTrust Manager Common security practices require periodic encryption key rotation. OpenShift Data Foundation automatically rotates encryption keys stored in a Kubernetes secret (non-KMS) and Vault on a weekly basis. However, key rotation for Vault KMS must be enabled after the storage cluster creation and does not happen by default. For more information, refer to the deployment guides. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . Cluster-wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using Vault tokens. A Kubernetes secret containing the Vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected, then the administrator has to provide the Vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with Vault using service accounts. If this authentication method is selected, then the administrator has to provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust Manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4.
Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol (msgr2) Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment while the cluster is being created. See the deployment guide for your environment for instructions on enabling data encryption in-transit during cluster creation. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx. Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx. Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. 
Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node; however, each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core, which correspond to the number of vCPUs as shown in the table below.
Table 6.1. Different SMT levels and their corresponding vCPUs
SMT level | SMT=1 | SMT=2 | SMT=4 | SMT=8
1 Core | # vCPUs=1 | # vCPUs=2 | # vCPUs=4 | # vCPUs=8
2 Cores | # vCPUs=2 | # vCPUs=4 | # vCPUs=8 | # vCPUs=16
4 Cores | # vCPUs=4 | # vCPUs=8 | # vCPUs=16 | # vCPUs=32
For systems where SMT is configured, the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, to 4 vCPUs on SMT level of 2, to 8 vCPUs on SMT level of 4, and to 16 vCPUs on SMT level of 8, as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at an SMT level of 8 will require a 2-core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation deployment should be a multiple of core-pairs.
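The subscription arithmetic above can be sketched as a short shell calculation. This is an illustrative sketch only, not a Red Hat sizing tool; the vCPU count and SMT level are assumed example values (use an SMT level of 2 for a hyperthreaded x86 system, or 1 when logical CPU threads are disabled):
vcpus=16   # assumed number of vCPUs assigned to the VM
smt=8      # assumed SMT level (logical threads per core)
cores=$(( (vcpus + smt - 1) / smt ))   # vCPUs divided by the SMT level, rounded up
subs=$(( (cores + 1) / 2 ))            # subscriptions are sold in 2-core units
echo "subscription cores: ${cores}, 2-core subscriptions needed: ${subs}"
For the 16 vCPU, SMT-8 example above this prints 2 cores and one 2-core subscription, matching the worked example; for an 8 vCPU hyperthreaded VM (smt=2) it prints 4 cores and two 2-core subscriptions.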
6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.17 is supported only on OpenShift Container Platform version 4.17 and its minor versions. Bug fixes for this version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports the gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. If high throughput is required, gp3-csi is recommended when deploying OpenShift Data Foundation. If you need high input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 7.0 or later; vSphere 8.0 or later. For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an internal cluster must meet both the storage device requirements and have a storage class providing either a vSAN or VMFS datastore via the vsphere-volume provisioner, or VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class providing local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also supports external mode where Red Hat Ceph Storage is running on x86. An internal cluster must meet both the storage device requirements and have a storage class providing local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. ROSA with hosted control planes (HCP) Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides AWS EBS volumes via the gp3-csi provisioner. 7.1.10. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab. Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install an RHCS cluster, see the installation guide . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation service pods are scheduled by kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes.
Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only
Deployment Mode | Base services | Additional device Set
Internal | 30 CPU (logical), 72 GiB memory, 3 storage devices | 6 CPU (logical), 15 GiB memory, 3 storage devices
External | 4 CPU (logical), 16 GiB memory | Not applicable
Example: For a 3-node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs.
Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with additional 500GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs . 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides the flexibility in deployment and helps to reduce the resource consumption. Table 7.6. 
Aggregate resource requirements for MCG only deployment
Deployment Mode | Core | Database (DB) | Endpoint
Internal | 1 CPU, 4 GiB memory | 0.5 CPU, 4 GiB memory | 1 CPU, 2 GiB memory
Note The default auto scale is between 1 and 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 GiB of RAM. NFS is optional and is disabled by default. The NFS volume can be accessed two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS . 7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post deployment.
Table 7.7. Recommended resource requirement for different performance profiles
Performance profile | CPU | Memory
Lean | 24 | 72 GiB
Balanced | 30 | 72 GiB
Performance | 45 | 96 GiB
Important Make sure to select the profiles based on the available free resources as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for an internal cluster can be summarized as follows:
Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key.
Nodes are sorted into pseudo failure domains if none exist.
Components requiring high availability are spread across failure domains.
A storage device must be accessible in each failure domain.
This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node. This recommendation both keeps nodes below cloud provider dynamic storage device attachment limits and limits the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in increments of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements .
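Before you rely on dynamic provisioning, it can help to confirm that a storage class backed by the expected provisioner actually exists on the cluster. The following is a standard OpenShift command rather than anything specific to OpenShift Data Foundation; it lists the available storage classes together with their provisioners so you can check them against the platform requirements earlier in this chapter:
$ oc get storageclass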
7.5.2. Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices.
Table 7.8. Example initial configurations with 3 nodes
Storage Device size | Storage Devices per node | Total capacity | Usable storage capacity
0.5 TiB | 1 | 1.5 TiB | 0.5 TiB
2 TiB | 1 | 6 TiB | 2 TiB
4 TiB | 1 | 12 TiB | 4 TiB
Table 7.9. Example of expanded configurations with 30 nodes (N)
Storage Device size (D) | Storage Devices per node (M) | Total capacity (D * M * N) | Usable storage capacity (D*M*N/3)
0.5 TiB | 3 | 45 TiB | 15 TiB
2 TiB | 6 | 360 TiB | 120 TiB
4 TiB | 9 | 1080 TiB | 360 TiB
Chapter 8. Network requirements OpenShift Data Foundation requires that at least one network interface that is used for the cluster network be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced support for IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in OpenShift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use the Multus multi-network plug-in on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for the exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks . You can configure your Multus networks to use IPv4 or IPv6 as a technology preview. This works only for Multus networks that are pure IPv4 or pure IPv6. Networks cannot be mixed mode.
Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. 8.2.1. Multus prerequisites In order for Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. This section will help clarify questions that could arise. Two basic requirements must be met:
OpenShift hosts must be able to route successfully to the Multus public network.
Pods on the Multus public network must be able to route successfully to OpenShift hosts.
These two requirements can be broken down further as follows. For routing Kubernetes hosts to the Multus public network, each host must ensure the following:
The host must have an interface connected to the Multus public network (the "public-network-interface").
The "public-network-interface" must have an IP address.
A route must exist to direct traffic destined for pods on the Multus public network through the "public-network-interface".
For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachmentDefinition must be configured to ensure the following:
The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network.
To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. Generally, both the NetworkAttachmentDefinition and node configurations must use the same network technology (Macvlan) to connect to the Multus public network. Node configurations and pod configurations are interrelated and tightly coupled. Both must be planned at the same time, and OpenShift Data Foundation cannot support Multus public networks without both. The "public-network-interface" must be the same for both. Generally, the connection technology (Macvlan) should also be the same for both. IP range(s) in the NetworkAttachmentDefinition must be encoded as routes on nodes, and, in mirror, IP ranges for nodes must be encoded as routes in the NetworkAttachmentDefinition. Some installations might not want to use the same public network IP address range for both pods and nodes. In the case where there are different ranges for pods and nodes, additional steps must be taken to ensure each range routes to the other so that they act as a single, contiguous network. These requirements require careful planning. See Multus examples to help understand and implement these requirements. Tip There are often ten or more OpenShift Data Foundation pods per storage node. The pod address space usually needs to be several times larger (or more) than the host address space. OpenShift Container Platform recommends using the NMState operator's NodeNetworkConfigurationPolicies as a good method of configuring hosts to meet host requirements. Other methods can be used as well if needed.
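As a quick spot check of the host-side requirements above, you can confirm from a node that an interface on the Multus public network exists and that a route to the pod range points at it. The interface name (odf-pub-shim) and the 192.168.0.0/16 range below are taken from the examples later in this chapter and are assumptions, not required values:
$ oc debug node/<node> -- chroot /host ip addr show odf-pub-shim
$ oc debug node/<node> -- chroot /host ip route show 192.168.0.0/16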
8.2.1.1. Multus network address space sizing Networks must have enough addresses to account for the number of storage pods that will attach to the network, plus some additional space to account for failover events. It is highly recommended to also plan ahead for future storage cluster expansion and estimate how large the OpenShift Container Platform and OpenShift Data Foundation clusters may grow in the future. Reserving addresses for future expansion means that there is lower risk of depleting the IP address pool unexpectedly during expansion. It is safest to allocate 25% more addresses (or more) than the total maximum number of addresses that are expected to be needed at one time in the storage cluster's lifetime. This helps lower the risk of depleting the IP address pool during failover and maintenance. For ease of writing corresponding network CIDR configurations, rounding totals up to the nearest power of 2 is also recommended. Three ranges must be planned:
If used, the public Network Attachment Definition address space must include enough IPs for the total number of ODF pods running in the openshift-storage namespace.
If used, the cluster Network Attachment Definition address space must include enough IPs for the total number of OSD pods running in the openshift-storage namespace.
If the Multus public network is used, the node public network address space must include enough IPs for the total number of OpenShift nodes connected to the Multus public network.
Note If the cluster uses a unified address space for the public Network Attachment Definition and node public network attachments, add these two requirements together. This is relevant, for example, if DHCP is used to manage IPs for the public network. Important For users with environments with piecewise CIDRs, that is one network with two or more different CIDRs, auto-detection is likely to find only a single CIDR, meaning Ceph daemons may fail to start or fail to connect to the network. See this knowledgebase article for information to mitigate this issue. 8.2.1.1.1. Recommendation The following recommendation suffices for most organizations. The recommendation uses the last 6.25% (1/16) of the reserved private address space (192.168.0.0/16), assuming the beginning of the range is in use or otherwise desirable. Approximate maximums (accounting for 25% overhead) are given.
Table 8.1. Multus recommendations
Network | Network range CIDR | Approximate maximums
Public Network Attachment Definition | 192.168.240.0/21 | 1,600 total ODF pods
Cluster Network Attachment Definition | 192.168.248.0/22 | 800 OSDs
Node public network attachments | 192.168.252.0/23 | 400 total nodes
8.2.1.1.2. Calculation More detailed address space sizes can be determined as follows:
1. Determine the maximum number of OSDs that are likely to be needed in the future. Add 25%, then add 5. Round the result up to the nearest power of 2. This is the cluster address space size.
2. Begin with the un-rounded number calculated in step 1. Add 64, then add 25%. Round the result up to the nearest power of 2. This is the public address space size for pods.
3. Determine the maximum number of total OpenShift nodes (including storage nodes) that are likely to be needed in the future. Add 25%. Round the result up to the nearest power of 2. This is the public address space size for nodes.
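The same calculation can be expressed as a short shell sketch. The OSD and node counts below are assumed example inputs, integer arithmetic is used for the 25% overhead, and this is only an illustration of the steps above, not a supported sizing tool:
max_osds=60    # assumed maximum number of OSDs ever expected
max_nodes=25   # assumed maximum OpenShift nodes on the Multus public network
pow2() { local n=$1 p=1; while [ "$p" -lt "$n" ]; do p=$((p * 2)); done; echo "$p"; }
cluster_raw=$(( max_osds + max_osds / 4 + 5 ))                # step 1: add 25%, then add 5
cluster_size=$(pow2 "$cluster_raw")                           # round up to a power of 2
public_raw=$(( cluster_raw + 64 + (cluster_raw + 64) / 4 ))   # step 2: add 64, then 25%
public_size=$(pow2 "$public_raw")
node_raw=$(( max_nodes + max_nodes / 4 ))                     # step 3: add 25%
node_size=$(pow2 "$node_raw")
echo "cluster: ${cluster_size}  public (pods): ${public_size}  nodes: ${node_size}"
With these example inputs the sketch prints 128, 256, and 32 addresses, which you would then express as CIDRs of the corresponding size.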
8.2.1.2. Verifying requirements have been met After configuring nodes and creating the Multus public NetworkAttachmentDefinition (see Creating network attachment definitions ), check that the node configurations and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each node can ping pods via the public network. Start a daemonset similar to the following example: List the Multus public network IPs assigned to test pods using a command like the following example. This example command lists all IPs assigned to all test pods (each will have 2 IPs). From the output, it is easy to manually extract the IPs associated with the Multus public network. In the example, test pod IPs on the Multus public network are: 192.168.20.22, 192.168.20.29, and 192.168.20.23. Check that each node (NODE) can reach all test pod IPs over the public network: If any node does not get a successful ping to a running pod, it is not safe to proceed. Diagnose and fix the issue, then repeat this testing. Some reasons you may encounter a problem include:
The host may not be properly attached to the Multus public network (via Macvlan).
The host may not be properly configured to route to the pod IP range.
The public NetworkAttachmentDefinition may not be properly configured to route back to the host IP range.
The host may have a firewall rule blocking the connection in either direction.
The network switch may have a firewall or security rule blocking the connection.
Suggested debugging steps:
Ensure nodes can ping each other over the public network using their "shim" IPs.
Ensure that the output of ip address on each node shows the "shim" interface with the expected IP address on the Multus public network.
8.2.2. Multus examples The relevant network plan for this cluster is as follows:
A dedicated NIC provides eth0 for the Multus public network.
Macvlan will be used to attach OpenShift pods to eth0.
The IP range 192.168.0.0/16 is free in the example cluster - pods and nodes will share this IP range on the Multus public network.
Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 Kubernetes hosts, more than the example organization will ever need).
Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255).
The example organization does not want to use DHCP unless necessary; therefore, nodes will have IPs on the Multus network (via eth0) assigned statically using the NMState operator's NodeNetworkConfigurationPolicy resources.
With DHCP unavailable, Whereabouts will be used to assign IPs to the Multus public network because it is easy to use out of the box.
There are 3 compute nodes in the OpenShift cluster on which OpenShift Data Foundation also runs: compute-0, compute-1, and compute-2.
Nodes' network policies must be configured to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Generally speaking, the host must connect to the Multus public network using the same technology that pods do. Pod connections are configured in the Network Attachment Definition. Because the host IP range is a subset of the whole range, hosts are not able to route to pods simply by IP assignment. A route must be added to hosts to allow them to route to the whole 192.168.0.0/16 range. NodeNetworkConfigurationPolicy desiredState specs will look like the following: For static IP management, each node must have a different NodeNetworkConfigurationPolicy. Select separate nodes for each policy to configure static networks.
A "shim" interface is used to connect hosts to the Multus public network using the same technology that the Network Attachment Definition will use. The host's "shim" must be of the same type as planned for pods, macvlan in this example. The interface must match the Multus public network interface selected in planning, eth0 in this example. The ipv4 (or ipv6 ) section configures node IP addresses on the Multus public network. IPs assigned to this node's shim must match the plan. This example uses 192.168.252.0/22 for node IPs on the Multus public network. For static IP management, don't forget to change the IP for each node. The routes section instructs nodes how to reach pods on the Multus public network. The route destination(s) must match the CIDR range planned for pods. In this case, it is safe to use the entire 192.168.0.0/16 range because it won't affect nodes' ability to reach other nodes over their "shim" interfaces. In general, this must match the CIDR used in the Multus public NetworkAttachmentDefinition. The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' exclude option to simplify the range request. The Whereabouts routes[].dst option ensures pods route to hosts via the Multus public network. This must match the plan for how to attach pods to the Multus public network. Nodes must attach using the same technology, Macvlan. The interface must match the Multus public network interface selected in planning, eth0 in this example. The plan for this example uses whereabouts instead of DHCP for assigning IPs to pods. For this example, it was decided that pods could be assigned any IP in the range 192.168.0.0/16 with the exception of a portion of the range allocated to nodes (see 5). whereabouts provides an exclude directive that allows easily excluding the range allocated for nodes from its pool. This allows keeping the range directive (see 4) simple. The routes section instructs pods how to reach nodes on the Multus public network. The route destination ( dst ) must match the CIDR range planned for nodes. 8.2.3. Holder pod deprecation Due to the recurring maintenance impact of holder pods during upgrade (holder pods are present when Multus is enabled), holder pods are deprecated in the ODF v4.16 release and targeted for removal in the ODF v4.18 release. This deprecation requires completing additional network configuration actions before removing the holder pods. In ODF v4.16, clusters with Multus enabled are upgraded to v4.17 following standard upgrade procedures. After the ODF cluster (with Multus enabled) is successfully upgraded to v4.17, administrators must then complete the procedure documented in the article Disabling Multus holder pods to disable and remove holder pods. Be aware that this disabling procedure is time consuming; however, it is not critical to complete the entire process immediately after upgrading to v4.17. It is critical to complete the process before ODF is upgraded to v4.18. 8.2.4. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN).
The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are more able to be ensured. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Dual network interface segregated configuration schematic example: Triple network interface full segregated configuration schematic example: 8.2.5. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface, however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.6. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which is later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created. 
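For example, a NetworkAttachmentDefinition for an OpenShift Data Foundation cluster network might look like the following sketch. The name (cluster-net), the interface (eth1), and the 192.168.248.0/22 range are assumptions, the range borrowed from the recommendations earlier in this chapter; a complete public-network definition with Whereabouts routes and an exclude list is shown in the Multus examples section.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: cluster-net
  namespace: openshift-storage
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.248.0/22"
    }
  }'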
OpenShift Data Foundation supports the macvlan driver, which includes the following features:
Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network.
Uses less CPU and provides better throughput than Linux bridge or ipvlan .
Bridge mode is almost always the best choice.
Near-host performance when the network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware.
OpenShift Data Foundation supports the following two types of IP address management:
whereabouts: Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require a DHCP server to provide IPs for Pods.
DHCP: Does not require a range field. A network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network.
Caution If there is a DHCP server, ensure the Multus-configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.7. Requirements for Multus configuration Prerequisites
The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network.
The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes.
Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds.
Each network requires a separate virtual local area network (VLAN) or subnet.
See Creating Multus networks for the necessary steps to configure a Multus based configuration on bare metal. Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) and DR solutions for stateful apps, which are broadly categorized as follows:
Metro-DR : Single Region and cross data center protection with no data loss.
Regional-DR : Cross Region protection with minimal potential data loss.
Disaster Recovery with stretch cluster : A single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities.
9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of the Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud this would be similar to protecting from an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article .
Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. Note Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter and RHACM requirements . 9.2. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on Asynchronous data replication and hence could have a potential data loss but provides the protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook and it's enhanced with the ability to: Enable pools for mirroring. Automatically mirror images across RBD pools. Provides csi-addons to manage per Persistent Volume Claim mirroring. This release of Regional-DR supports Multi-Cluster configuration that is deployed across different regions and data centers. For example, a 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. 
For detailed solution requirements, see Regional-DR requirements and RHACM requirements . 9.3. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for a no-data-loss DR solution deployed over multiple data centers with low latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For arbiter nodes, follow the latency requirements specified for etcd; see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, you must have a minimum of five nodes across three zones, where:
Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for the arbiter zone (the arbiter can be on a master node).
All the nodes must be manually labeled with the zone labels prior to cluster creation. For example, the zones can be labeled as:
topology.kubernetes.io/zone=arbiter (master or worker node)
topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes)
topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes)
For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. Chapter 10. Disconnected environment A disconnected environment is a network-restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer .
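When you mirror the Operator catalog for a restricted network, pruning the redhat-operator index image down to the required packages (listed in the next subsection) keeps the mirror small. A pruning command typically looks similar to the following sketch; the index version tag and the target registry and image name are assumptions for illustration, and you would append the optional packages only if you need them:
$ opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.17 \
    -p ocs-operator,odf-operator,mcg-operator,odf-csi-addons-operator,odr-cluster-operator,odr-hub-operator \
    -t <mirror-registry>/olm/redhat-operator-index:v4.17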
Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment:
ocs-operator
odf-operator
mcg-operator
odf-csi-addons-operator
odr-cluster-operator
odr-hub-operator
Optional: local-storage-operator Only for local storage deployments.
Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration.
Important Name the CatalogSource as redhat-operators . Chapter 11. Supported and Unsupported features for IBM Power and IBM Z
Table 11.1. List of supported and unsupported features on IBM Power and IBM Z
Features | IBM Power | IBM Z
Compact deployment | Unsupported | Unsupported
Dynamic storage devices | Unsupported | Supported
Stretched Cluster - Arbiter | Supported | Unsupported
Federal Information Processing Standard Publication (FIPS) | Unsupported | Unsupported
Ability to view pool compression metrics | Supported | Unsupported
Automated scaling of Multicloud Object Gateway (MCG) endpoint pods | Supported | Unsupported
Alerts to control overprovision | Supported | Unsupported
Alerts when Ceph Monitor runs out of space | Supported | Unsupported
Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem | Unsupported | Unsupported
IPV6 support | Unsupported | Unsupported
Multus | Unsupported | Unsupported
Multicloud Object Gateway (MCG) bucket replication | Supported | Unsupported
Quota support for object data | Supported | Unsupported
Minimum deployment | Unsupported | Unsupported
Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) | Supported | Unsupported
Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM | Supported | Supported
Single Node solution for Radio Access Network (RAN) | Unsupported | Unsupported
Support for network file system (NFS) services | Supported | Unsupported
Ability to change Multicloud Object Gateway (MCG) account credentials | Supported | Unsupported
Multicluster monitoring in Red Hat Advanced Cluster Management console | Supported | Unsupported
Deletion of expired objects in Multicloud Object Gateway lifecycle | Supported | Unsupported
Agnostic deployment of OpenShift Data Foundation on any OpenShift supported platform | Unsupported | Unsupported
Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure | Unsupported | Unsupported
OpenShift dual stack with OpenShift Data Foundation using IPv4 | Unsupported | Unsupported
Ability to disable Multicloud Object Gateway external service during deployment | Unsupported | Unsupported
Ability to allow overriding of default NooBaa backing store | Supported | Unsupported
Allowing ocs-operator to deploy two MGR pods, one active and one standby | Supported | Unsupported
Disaster Recovery for brownfield deployments | Unsupported | Supported
Automatic scaling of RGW | Unsupported | Unsupported
Chapter 12. Next steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make available services from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides.
Internal mode Deploying OpenShift Data Foundation using Amazon web services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMWare vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z Deploying OpenShift Data Foundation on any platform External mode Deploying OpenShift Data Foundation in external mode Internal or external For deploying multiple clusters, see Deploying multiple OpenShift Data Foundation clusters . | [
"apiVersion: apps/v1 kind: DaemonSet metadata: name: multus-public-test namespace: openshift-storage labels: app: multus-public-test spec: selector: matchLabels: app: multus-public-test template: metadata: labels: app: multus-public-test annotations: k8s.v1.cni.cncf.io/networks: openshift-storage/public-net # spec: containers: - name: test image: quay.io/ceph/ceph:v18 # image known to have 'ping' installed command: - sleep - infinity resources: {}",
"oc -n openshift-storage describe pod -l app=multus-public-test | grep -o -E 'Add .* from .*' Add eth0 [10.128.2.86/23] from ovn-kubernetes Add net1 [192.168.20.22/24] from default/public-net Add eth0 [10.129.2.173/23] from ovn-kubernetes Add net1 [192.168.20.29/24] from default/public-net Add eth0 [10.131.0.108/23] from ovn-kubernetes Add net1 [192.168.20.23/24] from default/public-net",
"oc debug node/NODE Starting pod/NODE-debug To use host binaries, run `chroot /host` Pod IP: **** If you don't see a command prompt, try pressing enter. sh-5.1# chroot /host sh-5.1# ping 192.168.20.22 PING 192.168.20.22 (192.168.20.22) 56(84) bytes of data. 64 bytes from 192.168.20.22: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 192.168.20.22: icmp_seq=2 ttl=64 time=0.056 ms ^C --- 192.168.20.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1046ms rtt min/avg/max/mdev = 0.056/0.074/0.093/0.018 ms sh-5.1# ping 192.168.20.29 PING 192.168.20.29 (192.168.20.29) 56(84) bytes of data. 64 bytes from 192.168.20.29: icmp_seq=1 ttl=64 time=0.403 ms 64 bytes from 192.168.20.29: icmp_seq=2 ttl=64 time=0.181 ms ^C --- 192.168.20.29 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1007ms rtt min/avg/max/mdev = 0.181/0.292/0.403/0.111 ms sh-5.1# ping 192.168.20.23 PING 192.168.20.23 (192.168.20.23) 56(84) bytes of data. 64 bytes from 192.168.20.23: icmp_seq=1 ttl=64 time=0.329 ms 64 bytes from 192.168.20.23: icmp_seq=2 ttl=64 time=0.227 ms ^C --- 192.168.20.23 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1047ms rtt min/avg/max/mdev = 0.227/0.278/0.329/0.051 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-0 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-0 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-0 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-1 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-1 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-1 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-2 # [1] namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-2 # [2] desiredState: Interfaces: [3] - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan # [4] state: up mac-vlan: base-iface: eth0 # [5] mode: bridge promiscuous: true ipv4: # [6] enabled: true dhcp: false address: - ip: 192.168.252.2 # STATIC IP FOR compute-2 # [7] prefix-length: 22 routes: # [8] config: - destination: 192.168.0.0/16 # [9] next-hop-interface: odf-pub-shim",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", # [1] \"master\": \"eth0\", # [2] \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", # [3] \"range\": \"192.168.0.0/16\", # [4] \"exclude\": [ \"192.168.252.0/22\" # [5] ], \"routes\": [ # [6] {\"dst\": \"192.168.252.0/22\"} # [7] ] } }'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/planning_your_deployment/external-mode-requirements_rhodf |
Chapter 1. Welcome to Red Hat Advanced Cluster Management for Kubernetes | Chapter 1. Welcome to Red Hat Advanced Cluster Management for Kubernetes Kubernetes provides a platform for deploying and managing containers in a standard, consistent control plane. However, as application workloads move from development to production, they often require multiple fit-for-purpose Kubernetes clusters to support DevOps pipelines. Note: Use of this Red Hat product requires licensing and subscription agreement. Users, such as administrators and site reliability engineers, face challenges as they work across a range of environments, including multiple data centers, private clouds, and public clouds that run Kubernetes clusters. Red Hat Advanced Cluster Management for Kubernetes provides the tools and capabilities to address these common challenges. Red Hat Advanced Cluster Management for Kubernetes provides end-to-end management visibility and control to manage your Kubernetes environment. Take control of your application modernization program with management capabilities for cluster creation, application lifecycle, and provide security and compliance for all of them across hybrid cloud environments. Clusters and applications are all visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet. The Welcome page from the Red Hat Advanced Cluster Management for Kubernetes console has a header that displays the Applications switcher to return to Red Hat OpenShift Container Platform and more. The tiles describe the main functions of the product and link to important console pages. For more information, see the Console overview . With Red Hat Advanced Cluster Management for Kubernetes: Work across a range of environments, including multiple data centers, private clouds and public clouds that run Kubernetes clusters. Easily create Kubernetes clusters and offer cluster lifecycle management in a single console. Enforce policies at the target clusters using Kubernetes-supported custom resource definitions. Deploy and maintain day-two operations of business applications distributed across your cluster landscape. This guide assumes that users are familiar with Kubernetes concepts and terminology. For more information about Kubernetes concepts, see Kubernetes Documentation . See the following documentation for information about the product: Multicluster architecture Glossary of terms 1.1. Multicluster architecture Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components, which are used to access and manage your clusters. Learn more about the architecture in the following sections, then follow the links to more detailed documentation. See the following high-level multicluster terms and components: Hub cluster Managed cluster Cluster lifecycle Application lifecycle Governance Observability References 1.1.1. Hub cluster The hub cluster is the common term that is used to define the central controller that runs in a Red Hat Advanced Cluster Management for Kubernetes cluster. From the hub cluster, you can access the console and product components, as well as the Red Hat Advanced Cluster Management APIs. You can also use the console to search resources across clusters and view your topology. Additionally, you can enable observability on your hub cluster to monitor metrics from your managed clusters across your cloud providers. 
The Red Hat Advanced Cluster Management hub cluster uses the MultiClusterHub operator to manage, upgrade, and install hub cluster components and runs in the open-cluster-management namespace. The hub cluster aggregates information from multiple clusters by using an asynchronous work request model and search collectors. The hub cluster maintains the state of clusters and applications that run on it. The local cluster is the term used to define a hub cluster that is also a managed cluster, discussed in the following sections. 1.1.2. Managed cluster The managed cluster is the term that is used to define additional clusters that are managed by the hub cluster. The connection between the two is completed by using the klusterlet , which is the agent that is installed on the managed cluster. The managed cluster receives and applies requests from the hub cluster and enables it to service cluster lifecycle, application lifecycle, governance, and observability on the managed cluster. For example, managed clusters send metrics to the hub cluster if the observability service is enabled. See Observing environments to receive metrics and optimize the health of all managed clusters. 1.1.3. Cluster lifecycle Red Hat Advanced Cluster Management cluster lifecycle defines the process of creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers. The cluster lifecycle function is provided by the multicluster engine for Kubernetes operator, which is installed automatically with Red Hat Advanced Cluster Management. See Cluster lifecycle introduction for general information about the cluster lifecycle function. From the hub cluster console, you can view an aggregation of all cluster health statuses, or view individual health metrics of many Kubernetes clusters. Additionally, you can upgrade managed OpenShift Container Platform clusters individually or in bulk, as well as destroy any OpenShift Container Platform clusters that you created using your hub cluster. From the console, you can also hibernate, resume, and detach clusters. 1.1.4. Application lifecycle Red Hat Advanced Cluster Management Application lifecycle defines the processes that are used to manage application resources on your managed clusters. A multicluster application allows you to deploy resources on multiple managed clusters, as well as maintain full control of Kubernetes resource updates for all aspects of the application with high availability. A multicluster application uses the Kubernetes specification, but provides additional automation of the deployment and lifecycle management of resources. Ansible Automation Platform jobs allow you to automate tasks. You can also set up a continuous GitOps environment to automate application consistency across clusters in development, staging, and production environments. See Managing applications for more application topics. 1.1.5. Governance Governance enables you to define policies that either enforce security compliance, or inform you of changes that violate the configured compliance requirements for your environment. Using dynamic policy templates, you can manage the policies and compliance requirements across all of your management clusters from a central interface. For more information, see the Security overview . Additionally, learn about access requirements from the Role-based access control documentation. 
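For a quick, hands-on view of the pieces described above, you can query the hub cluster with the oc CLI. The following is a minimal sketch only: it assumes the default open-cluster-management namespace and the resource names registered by the product, and the exact output depends on your release and environment.
# Check the MultiClusterHub operand that manages the hub cluster components
oc get multiclusterhub -n open-cluster-management
# List the clusters currently managed by this hub (the hub itself typically appears as local-cluster)
oc get managedclusters
# List governance policies across all namespaces on the hub
oc get policies.policy.open-cluster-management.io --all-namespaces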
After you configure a Red Hat Advanced Cluster Management hub cluster and a managed cluster, you can view and create policies with the Red Hat Advanced Cluster Management policy framework. You can visit the policy-collection open community to see what policies community members created and contributed, as well as contribute your own policies for others to use. 1.1.6. Observability The Observability component collects and reports the status and health of the OpenShift Container Platform version 4.x or later, managed clusters to the hub cluster, which are visible from the Grafana dashboard. You can create custom alerts to inform you of problems with your managed clusters. Because it requires configured persistent storage, Observability must be enabled after the Red Hat Advanced Cluster Management installation. For more information about Observability, see Observing environments introduction . 1.1.7. References Learn more about the release from the Release notes . See the product Installing and upgrading section to prepare your cluster and get configuration information. See Cluster lifecycle overview for more information about the operator that provides the cluster lifecycle features. 1.2. Glossary of terms Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components that are defined in the following sections. Additionally, some common Kubernetes terms are used within the product. Terms are listed alphabetically. 1.2.1. Relevant standardized glossaries Kubernetes glossary 1.2.2. Red Hat Advanced Cluster Management for Kubernetes terms 1.2.2.1. Application lifecycle The processes that are used to manage application resources on your managed clusters. A multicluster application uses a Kubernetes specification, but with additional automation of the deployment and lifecycle management of resources to individual clusters. 1.2.2.2. Channel A custom resource definition that references repositories where Kubernetes resources are stored, such as Git repositories, Helm chart repositories, ObjectStore repositories, or namespaces templates on the hub cluster. Channels support multiple subscriptions from multiple targets. 1.2.2.3. Cluster lifecycle Defines the process of creating, importing, and managing clusters across public and private clouds. 1.2.2.4. Console The graphical user interface for Red Hat Advanced Cluster Management for Kubernetes. 1.2.2.5. Deployable A resource that retrieves the output of a build, packages the output with configuration properties, and installs the package in a pre-defined location so that it can be tested or run. 1.2.2.6. Governance The Red Hat Advanced Cluster Management for Kubernetes processes used to manage security and compliance. 1.2.2.7. Hosted cluster An OpenShift Container Platform API endpoint that is managed by HyperShift. 1.2.2.8. Hosted cluster infrastructure Resources that exist in the customer cloud account, including network, compute, storage, and so on. 1.2.2.9. Hosted control plane An OpenShift Container Platform control plane that is running on the hosting service cluster, which is exposed by the API endpoint of a hosted cluster. The component parts of a control plane include etcd , apiserver , kube-controller-manager , vpn , and other components. 1.2.2.10. Hosted control plane infrastructure Resources on the management cluster or external cloud provider that are prerequisites to running hosted control plane processes. 1.2.2.11. 
Hosting service cluster An OpenShift Container Platform cluster that hosts the HyperShift operator and zero-to-many hosted clusters. 1.2.2.12. Hosted service cluster infrastructure Resources of the hosting service cluster, including network, compute, storage, and so on. 1.2.2.13. Hub cluster The central controller that runs in a Red Hat Advanced Cluster Management for Kubernetes cluster. From the hub cluster, you can access the console and components found on that console, as well as APIs. 1.2.2.14. klusterlet The agent that contains two controllers on the managed cluster that initiates a connection to the Red Hat Advanced Cluster Management for Kubernetes hub cluster. 1.2.2.15. Klusterlet add-on Specialized controller on the Klusterlet that provides additional management capability. 1.2.2.16. Managed cluster Created and imported clusters are managed by the klusterlet agent and its add-ons, which initiates a connection to the Red Hat Advanced Cluster Management for Kubernetes hub cluster. 1.2.2.17. Placement binding A resource that binds a placement to a policy. 1.2.2.18. Placement policy A policy that defines where the application components are deployed and how many replicas there are. 1.2.2.19. Subscriptions A resource that identifies the Kubernetes resources within channels (resource repositories), then places the Kubernetes resource on the target clusters. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/about/welcome-to-red-hat-advanced-cluster-management-for-kubernetes |
Using Ansible plug-ins for Red Hat Developer Hub | Using Ansible plug-ins for Red Hat Developer Hub Red Hat Ansible Automation Platform 2.4 Use Ansible plug-ins for Red Hat Developer Hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_ansible_plug-ins_for_red_hat_developer_hub/index |
Chapter 4. Ceph File System administration | Chapter 4. Ceph File System administration As a storage administrator, you can perform common Ceph File System (CephFS) administrative tasks, such as: To map a directory to a particular MDS rank, see Section 4.4, "Mapping directory trees to Metadata Server daemon ranks" . To disassociate a directory from a MDS rank, see Section 4.5, "Disassociating directory trees from Metadata Server daemon ranks" . To work with files and directory layouts, see Section 4.8, "Working with File and Directory Layouts" . To add a new data pool, see Section 4.6, "Adding data pools" . To work with quotas, see Section 4.7, "Working with Ceph File System quotas" . To remove a Ceph File System using the command-line interface, see Section 4.12, "Removing a Ceph File System using the command-line interface" . To remove a Ceph File System using Ansible, see Section 4.13, "Removing a Ceph File System using Ansible" . To set a minimum client version, see Section 4.14, "Setting a minimum client version" . To use the ceph mds fail command, see Section 4.15, "Using the ceph mds fail command" . 4.1. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemons ( ceph-mds ). Create and mount the Ceph File System. 4.2. Unmounting Ceph File Systems mounted as kernel clients How to unmount a Ceph File System that is mounted as a kernel client. Prerequisites Root-level access to the node doing the mounting. Procedure To unmount a Ceph File System mounted as a kernel client: Syntax Example Additional Resources The umount(8) manual page 4.3. Unmounting Ceph File Systems mounted as FUSE clients Unmounting a Ceph File System that is mounted as a File System in User Space (FUSE) client. Prerequisites Root-level access to the FUSE client node. Procedure To unmount a Ceph File System mounted in FUSE: Syntax Example Additional Resources The ceph-fuse(8) manual page 4.4. Mapping directory trees to Metadata Server daemon ranks To map a directory and its subdirectories to a particular active Metadata Server (MDS) rank so that its metadata is only managed by the MDS daemon holding that rank. This approach enables you to evenly spread application load or limit impact of users' metadata requests to the entire storage cluster. Important An internal balancer already dynamically spreads the application load. Therefore, only map directory trees to ranks for certain carefully chosen applications. In addition, when a directory is mapped to a rank, the balancer cannot split it. Consequently, a large number of operations within the mapped directory can overload the rank and the MDS daemon that manages it. Prerequisites At least two active MDS daemons. User access to the CephFS client node. Verify that the attr package is installed on the CephFS client node with a mounted Ceph File System. Procedure Add the p flag to the Ceph user's capabilities: Syntax Example Set the ceph.dir.pin extended attribute on a directory: Syntax Example This example assigns the /temp directory and all of its subdirectories to rank 2. Additional Resources See the Layout, quota, snapshot, and network restrictions section in the Red Hat Ceph Storage File System Guide for more details about the p flag. See the Disassociating directory trees from Metadata Server daemon ranks section in the Red Hat Ceph Storage File System Guide for more details. 
See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide for more details. 4.5. Disassociating directory trees from Metadata Server daemon ranks Disassociate a directory from a particular active Metadata Server (MDS) rank. Prerequisites User access to the Ceph File System (CephFS) client node. Ensure that the attr package is installed on the client node with a mounted CephFS. Procedure Set the ceph.dir.pin extended attribute to -1 on a directory: Syntax Example Note Any separately mapped subdirectories of /home/ceph-user/ are not affected. Additional Resources See the Mapping Directory Trees to MDS Ranks section in Red Hat Ceph Storage File System Guide for more details. 4.6. Adding data pools The Ceph File System (CephFS) supports adding more than one pool to be used for storing data. This can be useful for: Storing log data on reduced redundancy pools Storing user home directories on an SSD or NVMe pool Basic data segregation. Before using another data pool in the Ceph File System, you must add it as described in this section. By default, for storing file data, CephFS uses the initial data pool that was specified during its creation. To use a secondary data pool, you must also configure a part of the file system hierarchy to store file data in that pool or optionally within a namespace of that pool, using file and directory layouts. Prerequisites Root-level access to the Ceph Monitor node. Procedure Create a new data pool: Syntax Replace: POOL_NAME with the name of the pool. PG_NUMBER with the number of placement groups (PGs). Example Add the newly created pool under the control of the Metadata Servers: Syntax Replace: FS_NAME with the name of the file system. POOL_NAME with the name of the pool. Example: Verify that the pool was successfully added: Example If you use the cephx authentication, make sure that clients can access the new pool. Additional Resources See the Working with File and Directory Layouts for details. See the Creating Ceph File System Client Users for details. 4.7. Working with Ceph File System quotas As a storage administrator, you can view, set, and remove quotas on any directory in the file system. You can place quota restrictions on the number of bytes or the number of files within the directory. 4.7.1. Prerequisites Make sure that the attr package is installed. 4.7.2. Ceph File System quotas The Ceph File System (CephFS) quotas allow you to restrict the number of bytes or the number of files stored in the directory structure. Limitations CephFS quotas rely on the cooperation of the client mounting the file system to stop writing data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial, untrusted client from filling the file system. Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit, and when the processes stop writing data. The time period generally measures in the tenths of seconds. However, processes continue to write data during that time. The amount of additional data that the processes write depends on the amount of time elapsed before they stop. Previously, quotas were only supported with the userspace FUSE client. With Linux kernel version 4.17 or newer, the CephFS kernel client supports quotas against Ceph mimic or newer clusters. Those version requirements are met by Red Hat Enterprise Linux 8 and Red Hat Ceph Storage 4, respectively. 
The userspace FUSE client can be used on older and newer OS and cluster versions. The FUSE client is provided by the ceph-fuse package. When using path-based access restrictions, be sure to configure the quota on the directory to which the client is restricted, or to a directory nested beneath it. If the client has restricted access to a specific path based on the MDS capability, and the quota is configured on an ancestor directory that the client cannot access, the client will not enforce the quota. For example, if the client cannot access the /home/ directory and the quota is configured on /home/ , the client cannot enforce that quota on the directory /home/user/ . Snapshot file data that has been deleted or changed does not count towards the quota. 4.7.3. Viewing quotas Use the getfattr command and the ceph.quota extended attributes to view the quota settings for a directory. Note If the attributes appear on a directory inode, then that directory has a configured quota. If the attributes do not appear on the inode, then the directory does not have a quota set, although its parent directory might have a quota configured. If the value of the extended attribute is 0, the quota is not set. Prerequisites Make sure that the attr package is installed. Procedure To view CephFS quotas. Using a byte-limit quota: Syntax Example Using a file-limit quota: Syntax Example Additional Resources See the getfattr(1) manual page for more information. 4.7.4. Setting quotas This section describes how to use the setfattr command and the ceph.quota extended attributes to set the quota for a directory. Prerequisites Make sure that the attr package is installed. Procedure To set CephFS quotas. Using a byte-limit quota: Syntax Example In this example, 100000000 bytes equals 100 MB. Using a file-limit quota: Syntax Example In this example, 10000 equals 10,000 files. Additional Resources See the setfattr(1) manual page for more information. 4.7.5. Removing quotas This section describes how to use the setfattr command and the ceph.quota extended attributes to remove a quota from a directory. Prerequisites Make sure that the attr package is installed. Procedure To remove CephFS quotas. Using a byte-limit quota: Syntax Example Using a file-limit quota: Syntax Example Additional Resources See the setfattr(1) manual page for more information. 4.7.6. Additional Resources See the getfattr(1) manual page for more information. See the setfattr(1) manual page for more information. 4.8. Working with File and Directory Layouts As a storage administrator, you can control how file or directory data is mapped to objects. This section describes how to: Understand file and directory layouts Set file and directory layouts View file and directory layout fields View individual layout fields Remove the directory layouts 4.8.1. Prerequisites The installation of the attr package. 4.8.2. Overview of file and directory layouts This section explains what file and directory layouts are in the context for the Ceph File System. A layout of a file or directory controls how its content is mapped to Ceph RADOS objects. The directory layouts serves primarily for setting an inherited layout for new files in that directory. To view and set a file or directory layout, use virtual extended attributes or extended file attributes ( xattrs ). The name of the layout attributes depends on whether a file is a regular file or a directory: Regular files layout attributes are called ceph.file.layout . Directories layout attributes are called ceph.dir.layout . 
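For instance, both attribute names are read with the same getfattr call; the mount point and paths below are illustrative placeholders, not values from this guide:
# View the layout of a regular file (ceph.file.layout)
getfattr -n ceph.file.layout /mnt/cephfs/somefile
# View the layout of a directory (ceph.dir.layout)
getfattr -n ceph.dir.layout /mnt/cephfs/somedir
Note that, as described later in this chapter, a directory only reports a layout after one has been explicitly set on it.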
The File and Directory Layout Fields table lists available layout fields that you can set on files and directories. Layouts Inheritance Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect children. If a directory does not have any layouts set, files inherit the layout from the closest directory with layout in the directory structure. Additional Resources See the Layouts Inheritance for more details. 4.8.3. Setting file and directory layout fields Use the setfattr command to set layout fields on a file or directory. Important When you modify the layout fields of a file, the file must be empty, otherwise an error occurs. Prerequisites Root-level access to the node. Procedure To modify layout fields on a file or directory: Syntax Replace: TYPE with file or dir . FIELD with the name of the field. VALUE with the new value of the field. PATH with the path to the file or directory. Example Additional Resources See the table in the Overview of the file and directory layouts section of the Red Hat Ceph Storage File System Guide for more details. See the setfattr(1) manual page. 4.8.4. Viewing file and directory layout fields To use the getfattr command to view layout fields on a file or directory. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure To view layout fields on a file or directory as a single string: Syntax Replace PATH with the path to the file or directory. TYPE with file or dir . Example Note A directory does not have an explicit layout until you set it. Consequently, attempting to view the layout without first setting it fails because there are no changes to display. Additional Resources The getfattr(1) manual page. For more information, see Setting file and directory layouts section in the Red Hat Ceph Storage File System Guide . 4.8.5. Viewing individual layout fields Use the getfattr command to view individual layout fields for a file or directory. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure To view individual layout fields on a file or directory: Syntax Replace TYPE with file or dir . FIELD with the name of the field. PATH with the path to the file or directory. Example Note Pools in the pool field are indicated by name. However, newly created pools can be indicated by ID. Additional Resources The getfattr(1) manual page. For more information, see File and directory layout fields . 4.8.6. Removing directory layouts Use the setfattr command to remove layouts from a directory. Note When you set a file layout, you cannot change or remove it. Prerequisites A directory with a layout. Procedure To remove a layout from a directory: Syntax Example To remove the pool_namespace field: Syntax Example Note The pool_namespace field is the only field you can remove separately. Additional Resources The setfattr(1) manual page 4.9. Ceph File System snapshot considerations As a storage administrator, you can gain an understanding of the data structures, system components, and considerations to manage Ceph File System (CephFS) snapshots. Snapshots create an immutable view of a file system at the point in time of creation. You can create a snapshot within any directory, and all data in the file system under that directory is covered. 4.9.1. 
Storing snapshot metadata for a Ceph File System Storage of snapshot directory entries and their inodes occurs in-line as part of the directory they were in at the time of the snapshot. All directory entries include a first and last snapid for which they are valid. 4.9.2. Ceph File System snapshot writeback Ceph snapshots rely on clients to help determine which operations apply to a snapshot and flush snapshot data and metadata back to the OSD and MDS clusters. Handling snapshot writeback is an involved process because snapshots apply to subtrees of the file hierarchy, and the creation of snapshots can occur anytime. Parts of the file hierarchy that belong to the same set of snapshots are referred to by a single SnapRealm . Each snapshot applies to the subdirectory nested beneath a directory and divides the file hierarchy into multiple "realms" where all of the files contained by a realm share the same set of snapshots. The Ceph Metadata Server (MDS) controls client access to inode metadata and file data by issuing capabilities (caps) for each inode. During snapshot creation, clients acquire dirty metadata on inodes with capabilities to describe the file state at that time. When a client receives a ClientSnap message, it updates the local SnapRealm and its links to specific inodes and generates a CapSnap for the inode. Capability writeback flushes out the CapSnap and, if dirty data exists, the CapSnap is used to block new data writes until the snapshot flushes to the OSDs. The MDS generates snapshot-representing directory entries as part of the routine process for flushing them. The MDS keeps directory entries with outstanding CapSnap data pinned in memory and the journal until the writeback process flushes them. Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities. 4.9.3. Ceph File System snapshots and hard links Ceph moves an inode with multiple hard links to a dummy global SnapRealm . This dummy SnapRealm covers all snapshots in the filesystem. Any new snapshots preserve the inode's data. This preserved data covers snapshots on any linkage of the inode. 4.9.4. Updating a snapshot for a Ceph File System The process of updating a snapshot is similar to the process of deleting a snapshot. If you remove an inode out of its parent SnapRealm , Ceph generates a new SnapRealm for the renamed inode if the SnapRealm does not already exist. Ceph saves the IDs of snapshots that are effective on the original parent SnapRealm into the past_parent_snaps data structure of the new SnapRealm and then follows a process similar to creating a snapshot. Additional Resources For details about snapshot data structures, see Ceph File System snapshot data structures in Red Hat Ceph Storage File System Guide . 4.9.5. Ceph File System snapshots and multiple file systems Snapshots are known to not function properly with multiple file systems. If you have multiple file systems sharing a single Ceph pool with namespaces, their snapshots will collide, and deleting one snapshot results in missing file data for other snapshots sharing the same Ceph pool. 4.9.6. Ceph File System snapshot data structures The Ceph File System (CephFS) uses the following snapshot data structures to store data efficiently: SnapRealm A SnapRealm is created whenever you create a snapshot at a new point in the file hierarchy or when you move a snapshotted inode outside its parent snapshot. 
A single SnapRealm represents the parts of the file hierarchy that belong to the same set of snapshots. A SnapRealm contains a sr_t_srnode and inodes_with_caps that are part of the snapshot. sr_t An sr_t is the on-disk snapshot metadata. It contains sequence counters, time-stamps, and a list of associated snapshot IDs and the past_parent_snaps . SnapServer A SnapServer manages snapshot ID allocation, snapshot deletion, and maintaining a list of cumulative snapshots in the file system. A file system only has one instance of a SnapServer . SnapContext A SnapContext consists of a snapshot sequence ID (snapid) and all the snapshot IDs currently defined for an object. When a write operation occurs, a Ceph client provides a SnapContext to specify the set of snapshots that exist for an object. To generate a SnapContext list, Ceph combines snapids associated with the SnapRealm and all valid snapids in the past_parent_snaps data structure. File data is stored using RADOS self-managed snapshots. In a self-managed snapshot, the client must provide the current SnapContext on each write. Clients are careful to use the correct SnapContext when writing file data to the Ceph OSDs. SnapClient cached effective snapshots filter out stale snapids. SnapClient A SnapClient is used to communicate with a SnapServer and cache cumulative snapshots locally. Each Metadata Server (MDS) rank has a SnapClient instance. 4.10. Managing Ceph File System snapshots As a storage administrator, you can take a point-in-time snapshot of a Ceph File System (CephFS) directory. CephFS snapshots are asynchronous, and you can choose which directory snapshot creation occurs in. 4.10.1. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. 4.10.2. Ceph File System snapshots A Ceph File System (CephFS) snapshot creates an immutable, point-in-time view of a Ceph File System. CephFS snapshots are asynchronous and are kept in a special hidden directory in the CephFS directory, named .snap . You can specify snapshot creation for any directory within a Ceph File System. When specifying a directory, the snapshot also includes all the subdirectories beneath it. Warning Each Ceph Metadata Server (MDS) cluster allocates the snap identifiers independently. Using snapshots for multiple Ceph File Systems that are sharing a single pool causes snapshot collisions and results in missing file data. Additional Resources See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. 4.10.3. Enabling a snapshot for a Ceph File System New Ceph File Systems enable the snapshotting feature by default, but you must manually enable the feature on existing Ceph File Systems. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph Metadata Server (MDS) node. Procedure For existing Ceph File Systems, enable the snapshotting feature: Syntax Example Additional Resources See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on creating a snapshot. See the Deleting a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on deleting a snapshot. See the Restoring a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on restoring a snapshot. 4.10.4. 
Creating a snapshot for a Ceph File System You can create an immutable, point-in-time view of a Ceph File System by creating a snapshot. A snapshot uses a hidden directory located in the directory to snapshot. The name of this directory is .snap by default. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph Metadata Server (MDS) node. Procedure To create a snapshot, create a new subdirectory inside the .snap directory. The snapshot name is the new subdirectory name. Syntax Example This example creates the new-snaps subdirectory on a Ceph File System that is mounted on /mnt/cephfs and informs the Ceph Metadata Server (MDS) to start making snapshots. Verification List the new snapshot directory: Syntax The new-snaps subdirectory displays under the .snap directory. Additional Resources See the Deleting a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on deleting a snapshot. See the Restoring a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on restoring a snapshot. 4.10.5. Deleting a snapshot for a Ceph File System You can delete a snapshot by removing the corresponding directory in a .snap directory. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Creation of snapshots on a Ceph File System. Root-level access to a Ceph Metadata Server (MDS) node. Procedure To delete a snapshot, remove the corresponding directory: Syntax Example This example deletes the new-snaps subdirectory on a Ceph File System that is mounted on /mnt/cephfs . Note Contrary to a regular directory, a rmdir command succeeds even if the directory is not empty, so you do not need to use a recursive rm command. Important Attempting to delete root-level snapshots, which might contain underlying snapshots, will fail. Additional Resources See the Restoring a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on restoring a snapshot. See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on creating a snapshot. 4.10.6. Restoring a snapshot for a Ceph File System You can restore a file from a snapshot or fully restore a complete snapshot for a Ceph File System (CephFS). Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph Metadata Server (MDS) node. Procedure To restore a file from a snapshot, copy it from the snapshot directory to the regular tree: Syntax Example This example restores file1 to the current directory. You can also fully restore a snapshot from the .snap directory tree. Replace the current entries with copies from the desired snapshot: Syntax Example This example removes all files and directories under dir1 and restores the files from the new-snaps snapshot to the current directory, dir1 . 4.10.7. Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . 4.11. Taking down a Ceph File System cluster You can take down Ceph File System (CephFS) cluster by simply setting the down flag true . Doing this gracefully shuts down the Metadata Server (MDS) daemons by flushing journals to the metadata pool and all client I/O is stopped. 
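For example, using the example file system name cephfs that is used throughout this chapter, the graceful sequence looks like this (the commands are shown in full in the procedure that follows):
# Gracefully take the file system down; journals are flushed and client I/O stops
ceph fs set cephfs down true
# Bring the file system back up after maintenance
ceph fs set cephfs down false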
You can also take the CephFS cluster down quickly for testing the deletion of a file system and bring the Metadata Server (MDS) daemons down, for example, practicing a disaster recovery scenario. Doing this sets the joinable flag to prevent the MDS standby daemons from activating the file system. Prerequisites User access to the Ceph Monitor node. Procedure To mark the CephFS cluster down: Syntax Example To bring the CephFS cluster back up: Syntax Example or To quickly take down a CephFS cluster: Syntax Example 4.12. Removing a Ceph File System using the command-line interface You can remove a Ceph File System (CephFS) using the command-line interface. Before doing so, consider backing up all the data and verifying that all clients have unmounted the file system locally. Warning This operation is destructive and will make the data stored on the Ceph File System permanently inaccessible. Prerequisites Back up the data. All clients have unmounted the Ceph File System (CephFS). Root-level access to a Ceph Monitor node. Procedure Display the CephFS status to determine the MDS ranks. Syntax Example In the example above, the rank is 0 . Mark the CephFS as down: Syntax Replace FS_NAME with the name of the CephFS you want to remove. Example Display the status of the CephFS to determine it has stopped: Syntax Example After some time, the MDS is no longer listed: Example Fail all MDS ranks shown in the status of step one: Syntax Replace RANK with the rank of the MDS daemons to fail. Example Remove the CephFS: Syntax Replace FS_NAME with the name of the Ceph File System you want to remove. Example Verify that the file system is removed: Syntax Example Optional: Remove the pools that were used by CephFS. On a Ceph Monitor node, list the pools: Syntax Example In the example output, cephfs_metadata and cephfs_data are the pools that were used by CephFS. Remove the metadata pool: Syntax Replace CEPH_METADATA_POOL with the pool CephFS used for metadata storage by including the pool name twice. Example Remove the data pool: Syntax Replace CEPH_DATA_POOL with the pool CephFS used for data storage by including the pool name twice. Example Additional Resources See Removing a Ceph File System Using Ansible in the Red Hat Ceph Storage File System Guide . See the Delete a pool section in the Red Hat Ceph Storage Storage Strategies Guide . 4.13. Removing a Ceph File System using Ansible You can remove a Ceph File System (CephFS) using ceph-ansible . Before doing so, consider backing up all the data and verifying that all clients have unmounted the file system locally. Warning This operation is destructive and will make the data stored on the Ceph File System permanently inaccessible. Prerequisites A running Red Hat Ceph Storage cluster. A good backup of the data. All clients have unmounted the Ceph File System. Access to the Ansible administration node. Root-level access to a Ceph Monitor node. Procedure Navigate to the /usr/share/ceph-ansible/ directory: Identify the Ceph Metadata Server (MDS) nodes by reviewing the [mdss] section in the Ansible inventory file. On the Ansible administration node, open /usr/share/ceph-ansible/hosts : Example In the example, cluster1-node5 and cluster1-node6 are the MDS nodes. Set the max_mds parameter to 1 : Syntax Example Run the shrink-mds.yml playbook, specifying the Metadata Server (MDS) to remove: Syntax Replace MDS_NODE with the Metadata Server node you want to remove. The Ansible playbook will ask you if you want to shrink the cluster. Type yes and press the enter key.
Example Optional: Repeat the process for any additional MDS nodes: Syntax Replace MDS_NODE with the Metadata Server node you want to remove. The Ansible playbook will ask you if you want to shrink the cluster. Type yes and press the enter key. Example Check the status of the CephFS: Syntax Example Remove the [mdss] section and the nodes in it from the Ansible inventory file so they will not be reprovisioned as metadata servers on future runs of the site.yml or site-container.yml playbooks. Open for editing the Ansible inventory file, /usr/share/ceph-ansible/hosts : Example Remove the [mdss] section and all nodes under it. Remove the CephFS: Syntax Replace FS_NAME with the name of the Ceph File System you want to remove. Example Optional: Remove the pools that were used by CephFS. On a Ceph Monitor node, list the pools: Syntax Find the pools that were used by CephFS. Example In the example output, cephfs_metadata and cephfs_data are the pools that were used by CephFS. Remove the metadata pool: Syntax Replace CEPH_METADATA_POOL with the pool CephFS used for metadata storage by including the pool name twice. Example Remove the data pool: Syntax Replace CEPH_DATA_POOL with the pool CephFS used for data storage by including the pool name twice. Example Verify the pools no longer exist: Example The cephfs_metadata and cephfs_data pools are no longer listed. Additional Resources See Removing a Ceph File System Manually in the Red Hat Ceph Storage File System Guide . See the Delete a pool section in the Red Hat Ceph Storage Storage Strategies Guide . 4.14. Setting a minimum client version You can set a minimum version of Ceph that a third-party client must be running to connect to a Red Hat Ceph Storage Ceph File System (CephFS). Set the min_compat_client parameter to prevent older clients from mounting the file system. CephFS will also automatically evict currently connected clients that use an older version than the version set with min_compat_client . The rationale for this setting is to prevent older clients which might include bugs or have incomplete feature compatibility from connecting to the cluster and disrupting other clients. For example, some older versions of CephFS clients might not release capabilities properly and cause other client requests to be handled slowly. The values of min_compat_client are based on the upstream Ceph versions. Red Hat recommends that the third-party clients use the same major upstream version as the Red Hat Ceph Storage cluster is based on. See the following table for the upstream versions and corresponding Red Hat Ceph Storage versions. Table 4.1. min_compat_client values Value Upstream Ceph version Red Hat Ceph Storage version luminous 12.2 Red Hat Ceph Storage 3 mimic 13.2 not applicable nautilus 14.2 Red Hat Ceph Storage 4 Important If you use Red Hat Enterprise Linux 7, do not set min_compat_client to a later version than luminous because Red Hat Enterprise Linux 7 is considered a luminous client and if you use a later version, CephFS does not allow it to access the mount point. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed Procedure Set the minimum client version: Replace name with the name of the Ceph File System and release with the minimum client version.
For example to restrict clients to use the nautilus upstream version at minimum on the cephfs Ceph File System: See Table 4.1, " min_compat_client values" for the full list of available values and how they correspond with Red Hat Ceph Storage versions. 4.15. Using the ceph mds fail command Use the ceph mds fail command to: Mark an MDS daemon as failed. If the daemon was active and a suitable standby daemon was available, and if the standby daemon was active after disabling the standby-replay configuration, using this command forces a failover to the standby daemon. By disabling the standby-replay daemon, this prevents new standby-replay daemons from being assigned. Restart a running MDS daemon. If the daemon was active and a suitable standby daemon was available, the "failed" daemon becomes a standby daemon. Prerequisites Installation and configuration of the Ceph MDS daemons. Procedure To fail a daemon: Syntax Where MDS_NAME is the name of the standby-replay MDS node. Example Note You can find the Ceph MDS name from the ceph fs status command. Additional Resources See the Decreasing the Number of Active MDS Daemons in the Red Hat Ceph Storage File System Guide . See the Configuring Standby Metadata Server Daemons in the Red Hat Ceph Storage File System Guide . See the Explanation of Ranks in Metadata Server Configuration in the Red Hat Ceph Storage File System Guide . 4.16. Ceph File System client evictions When a Ceph File System (CephFS) client is unresponsive or misbehaving, it might be necessary to forcibly terminate, or evict it from accessing the CephFS. Evicting a CephFS client prevents it from communicating further with Metadata Server (MDS) daemons and Ceph OSD daemons. If a CephFS client is buffering I/O to the CephFS at the time of eviction, then any un-flushed data will be lost. The CephFS client eviction process applies to all client types: FUSE mounts, kernel mounts, NFS gateways, and any process using libcephfs API library. You can evict CephFS clients automatically, if they fail to communicate promptly with the MDS daemon, or manually. Automatic Client Eviction These scenarios cause an automatic CephFS client eviction: If a CephFS client has not communicated with the active MDS daemon for over the default 300 seconds, or as set by the session_autoclose option. If the mds_cap_revoke_eviction_timeout option is set, and a CephFS client has not responded to the cap revoke messages for over the set amount of seconds. The mds_cap_revoke_eviction_timeout option is disabled by default. During MDS startup or failover, the MDS daemon goes through a reconnect phase waiting for all the CephFS clients to connect to the new MDS daemon. If any CephFS clients fails to reconnect within the default time window of 45 seconds, or as set by the mds_reconnect_timeout option. Additional Resources See the Manually evicting a Ceph File System client section in the Red Hat Ceph Storage File System Guide for more details. 4.17. Blacklist Ceph File System clients Ceph File System client blacklisting is enabled by default. When you send an eviction command to a single Metadata Server (MDS) daemon, it propagates the blacklist to the other MDS daemons. This is to prevent the CephFS client from accessing any data objects, so it is necessary to update the other CephFS clients, and MDS daemons with the latest Ceph OSD map, which includes the blacklisted client entries. An internal "osdmap epoch barrier" mechanism is used when updating the Ceph OSD map. 
The purpose of the barrier is to verify that the CephFS clients receiving the capabilities have a sufficiently recent Ceph OSD map before any capabilities are assigned that might allow access to the same RADOS objects, so that they do not race with cancelled operations, such as operations from ENOSPC conditions or from clients blacklisted by evictions. If you are experiencing frequent CephFS client evictions due to slow nodes or an unreliable network, and you cannot fix the underlying issue, then you can ask the MDS to be less strict. It is possible to respond to slow CephFS clients by simply dropping their MDS sessions, but permit the CephFS client to re-open sessions and to continue talking to Ceph OSDs. Setting the mds_session_blacklist_on_timeout and mds_session_blacklist_on_evict options to false enables this mode. Note When blacklisting is disabled, evicting a CephFS client only affects the MDS daemon that you send the command to. On a system with multiple active MDS daemons, you would need to send an eviction command to each active daemon. 4.18. Manually evicting a Ceph File System client You might want to manually evict a Ceph File System (CephFS) client, if the client is misbehaving and you do not have access to the client node, or if a client dies, and you do not want to wait for the client session to time out. Prerequisites User access to the Ceph Monitor node. Procedure Review the client list: Syntax Example Evict the specified CephFS client: Syntax Example 4.19. Removing a Ceph File System client from the blacklist In some situations, it can be useful to allow a blacklisted Ceph File System (CephFS) client to reconnect to the storage cluster. Important Removing a CephFS client from the blacklist puts data integrity at risk, and does not guarantee a fully healthy and functional CephFS client as a result. The best way to get a fully healthy CephFS client back after an eviction is to unmount the CephFS client and do a fresh mount. If other CephFS clients are accessing files that the blacklisted CephFS client was doing buffered I/O to, data corruption can result. Prerequisites User access to the Ceph Monitor node. Procedure Review the blacklist: Example Remove the CephFS client from the blacklist: Syntax Example Optionally, to have FUSE-based CephFS clients automatically try to reconnect when they are removed from the blacklist, set the following option to true on the FUSE client: 4.20. Additional Resources For details, see Chapter 3, Deployment of the Ceph File System . For details, see the Red Hat Ceph Storage Installation Guide . For details, see the Configuring Metadata Server Daemons in the Red Hat Ceph Storage File System Guide . | [
"umount MOUNT_POINT",
"umount /mnt/cephfs",
"fusermount -u MOUNT_POINT",
"fusermount -u /mnt/cephfs",
"ceph fs authorize FILE_SYSTEM_NAME client.CLIENT_NAME /DIRECTORY CAPABILITY [/DIRECTORY CAPABILITY]",
"[user@client ~]USD ceph fs authorize cephfs_a client.1 /temp rwp client.1 key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps: [mds] allow r, allow rwp path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a",
"setfattr -n ceph.dir.pin -v RANK DIRECTORY",
"[user@client ~]USD setfattr -n ceph.dir.pin -v 2 /temp",
"setfattr -n ceph.dir.pin -v -1 DIRECTORY",
"[user@client ~]USD serfattr -n ceph.dir.pin -v -1 /home/ceph-user",
"ceph osd pool create POOL_NAME PG_NUMBER",
"ceph osd pool create cephfs_data_ssd 64 pool 'cephfs_data_ssd' created",
"ceph fs add_data_pool FS_NAME POOL_NAME",
"ceph fs add_data_pool cephfs cephfs_data_ssd added data pool 6 to fsmap",
"ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]",
"getfattr -n ceph.quota.max_bytes DIRECTORY",
"getfattr -n ceph.quota.max_bytes /cephfs/",
"getfattr -n ceph.quota.max_files DIRECTORY",
"getfattr -n ceph.quota.max_files /cephfs/",
"setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir",
"setfattr -n ceph.quota.max_bytes -v 100000000 /cephfs/",
"setfattr -n ceph.quota.max_files -v 10000 /some/dir",
"setfattr -n ceph.quota.max_files -v 10000 /cephfs/",
"setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY",
"setfattr -n ceph.quota.max_bytes -v 0 /cephfs/",
"setfattr -n ceph.quota.max_files -v 0 DIRECTORY",
"setfattr -n ceph.quota.max_files -v 0 /cephfs/",
"setfattr -n ceph. TYPE .layout. FIELD -v VALUE PATH",
"setfattr -n ceph.file.layout.stripe_unit -v 1048576 test",
"getfattr -n ceph. TYPE .layout PATH",
"[root@mon ~] getfattr -n ceph.dir.layout /home/test ceph.dir.layout=\"stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data\"",
"getfattr -n ceph. TYPE .layout. FIELD _PATH",
"[root@mon ~] getfattr -n ceph.file.layout.pool test ceph.file.layout.pool=\"cephfs_data\"",
"setfattr -x ceph.dir.layout DIRECTORY_PATH",
"[user@client ~]USD setfattr -x ceph.dir.layout /home/cephfs",
"setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH",
"[user@client ~]USD setfattr -x ceph.dir.layout.pool_namespace /home/cephfs",
"ceph fs set FILE_SYSTEM_NAME allow_new_snaps true",
"ceph fs set cephfs allow_new_snaps true enabled new snapshots",
"mkdir NEW_DIRECTORY_PATH",
"mkdir .snap/new-snaps",
"ls -l .snap/",
"rmdir DIRECTORY_PATH",
"rmdir .snap/new-snaps",
"cp -a .snap/ SNAP_DIRECTORY / FILENAME",
"cp .snap/new-snaps/file1 .",
"rm -rf * cp -a .snap/ SNAP_DIRECTORY /* .",
"rm -rf * cp -a .snap/new-snaps/* .",
"ceph fs set FS_NAME down true",
"ceph fs set cephfs down true",
"ceph fs set FS_NAME down false",
"ceph fs set cephfs down false",
"ceph fs fail FS_NAME",
"ceph fs fail cephfs",
"ceph fs status",
"ceph fs status cephfs - 0 clients ====== +------+--------+----------------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+----------------+---------------+-------+-------+ | 0 | active | cluster1-node6 | Reqs: 0 /s | 10 | 13 | +------+--------+----------------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 2688k | 15.0G | | cephfs_data | data | 0 | 15.0G | +-----------------+----------+-------+-------+ +----------------+ | Standby MDS | +----------------+ | cluster1-node5 | +----------------+",
"ceph fs set FS_NAME down true",
"ceph fs set cephfs down true marked down",
"ceph fs status",
"ceph fs status cephfs - 0 clients ====== +------+----------+----------------+----------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+----------+----------------+----------+-------+-------+ | 0 | stopping | cluster1-node6 | | 10 | 12 | +------+----------+----------------+----------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 2688k | 15.0G | | cephfs_data | data | 0 | 15.0G | +-----------------+----------+-------+-------+ +----------------+ | Standby MDS | +----------------+ | cluster1-node5 | +----------------+",
"ceph fs status cephfs - 0 clients ====== +------+-------+-----+----------+-----+------+ | Rank | State | MDS | Activity | dns | inos | +------+-------+-----+----------+-----+------+ +------+-------+-----+----------+-----+------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 2688k | 15.0G | | cephfs_data | data | 0 | 15.0G | +-----------------+----------+-------+-------+ +----------------+ | Standby MDS | +----------------+ | cluster1-node5 | +----------------+",
"ceph mds fail RANK",
"ceph mds fail 0",
"ceph fs rm FS_NAME --yes-i-really-mean-it",
"ceph fs rm cephfs --yes-i-really-mean-it",
"ceph fs ls",
"ceph fs ls No filesystems enabled",
"ceph osd pool ls",
"ceph osd pool ls rbd cephfs_data cephfs_metadata",
"ceph osd pool delete CEPH_METADATA_POOL CEPH_METADATA_POOL --yes-i-really-really-mean-it",
"ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it pool 'cephfs_metadata' removed",
"ceph osd pool delete CEPH_DATA_POOL CEPH_DATA_POOL --yes-i-really-really-mean-it",
"ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it pool 'cephfs_data' removed",
"[admin@admin ~]USD cd /usr/share/ceph-ansible",
"[mdss] cluster1-node5 cluster1-node6",
"ceph fs set NAME max_mds NUMBER",
"ceph fs set cephfs max_mds 1",
"ansible-playbook infrastructure-playbooks/shrink-mds.yml -e mds_to_kill= MDS_NODE -i hosts",
"[admin@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/shrink-mds.yml -e mds_to_kill=cluster1-node6 -i hosts",
"ansible-playbook infrastructure-playbooks/shrink-mds.yml -e mds_to_kill= MDS_NODE -i hosts",
"[admin@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/shrink-mds.yml -e mds_to_kill=cluster1-node5 -i hosts",
"ceph fs status",
"ceph fs status cephfs - 0 clients ====== +------+--------+----------------+---------------+-------+-------+ | Rank | State | MDS | Activity | dns | inos | +------+--------+----------------+---------------+-------+-------+ | 0 | failed | cluster1-node6 | Reqs: 0 /s | 10 | 13 | +------+--------+----------------+---------------+-------+-------+ +-----------------+----------+-------+-------+ | Pool | type | used | avail | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 2688k | 15.0G | | cephfs_data | data | 0 | 15.0G | +-----------------+----------+-------+-------+ +----------------+ | Standby MDS | +----------------+ | cluster1-node5 | +----------------+",
"[mdss] cluster1-node5 cluster1-node6",
"ceph fs rm FS_NAME --yes-i-really-mean-it",
"ceph fs rm cephfs --yes-i-really-mean-it",
"ceph osd pool ls",
"ceph osd pool ls rbd cephfs_data cephfs_metadata",
"ceph osd pool delete CEPH_METADATA_POOL CEPH_METADATA_POOL --yes-i-really-really-mean-it",
"ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it pool 'cephfs_metadata' removed",
"ceph osd pool delete CEPH_DATA_POOL CEPH_DATA_POOL --yes-i-really-really-mean-it",
"ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it pool 'cephfs_data' removed",
"ceph osd pool ls rbd",
"ceph fs set name min_compat_client release",
"ceph fs set cephfs min_compat_client nautilus",
"ceph mds fail MDS_NAME",
"ceph mds fail example01",
"ceph tell DAEMON_NAME client ls",
"ceph tell mds.0 client ls [ { \"id\": 4305, \"num_leases\": 0, \"num_caps\": 3, \"state\": \"open\", \"replay_requests\": 0, \"completed_requests\": 0, \"reconnecting\": false, \"inst\": \"client.4305 172.21.9.34:0/422650892\", \"client_metadata\": { \"ceph_sha1\": \"ae81e49d369875ac8b569ff3e3c456a31b8f3af5\", \"ceph_version\": \"ceph version 12.0.0-1934-gae81e49 (ae81e49d369875ac8b569ff3e3c456a31b8f3af5)\", \"entity_id\": \"0\", \"hostname\": \"senta04\", \"mount_point\": \"/tmp/tmpcMpF1b/mnt.0\", \"pid\": \"29377\", \"root\": \"/\" } } ]",
"ceph tell DAEMON_NAME client evict id= ID_NUMBER",
"ceph tell mds.0 client evict id=4305",
"ceph osd blacklist ls listed 1 entries 127.0.0.1:0/3710147553 2020-03-19 11:32:24.716146",
"ceph osd blacklist rm CLIENT_NAME_OR_IP_ADDR",
"ceph osd blacklist rm 127.0.0.1:0/3710147553 un-blacklisting 127.0.0.1:0/3710147553",
"client_reconnect_stale = true"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/ceph-file-system-administration |
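The client listing and eviction commands above act on one client ID at a time. As a minimal sketch (not part of the original procedure), the two steps can be combined in a shell loop before taking the file system down; the MDS rank (0) and the use of jq to parse the JSON output are assumptions for illustration:

for id in $(ceph tell mds.0 client ls 2>/dev/null | jq -r '.[].id'); do
    echo "Evicting client ${id}"
    ceph tell mds.0 client evict id="${id}"
done
# Review any blacklist entries left behind by the evictions
ceph osd blacklist ls

Once no client sessions remain, the fail, rm, and pool delete commands shown earlier can proceed without waiting on clients.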
Index | Index A acl mount option, Mounting a GFS2 File System adding journals to a file system, Adding Journals to a GFS2 File System atime, configuring updates, Configuring atime Updates mounting with noatime , Mount with noatime mounting with relatime , Mount with relatime C Configuration considerations, GFS2 Configuration and Operational Considerations configuration, before, Before Setting Up GFS2 D data journaling, Data Journaling debugfs, GFS2 Tracepoints and the debugfs glocks File debugfs file, Troubleshooting GFS2 Performance with the GFS2 Lock Dump disk quotas additional resources, References assigning per group, Assigning Quotas Per Group assigning per user, Assigning Quotas Per User enabling, Configuring Disk Quotas creating quota files, Creating the Quota Database Files quotacheck, running, Creating the Quota Database Files hard limit, Assigning Quotas Per User management of, Managing Disk Quotas quotacheck command, using to check, Keeping Quotas Accurate reporting, Managing Disk Quotas soft limit, Assigning Quotas Per User F features, new and changed, New and Changed Features file system adding journals, Adding Journals to a GFS2 File System atime, configuring updates, Configuring atime Updates mounting with noatime , Mount with noatime mounting with relatime , Mount with relatime data journaling, Data Journaling growing, Growing a GFS2 File System making, Creating a GFS2 File System mounting, Mounting a GFS2 File System quota management, GFS2 Quota Management , Setting Up Quotas in Enforcement or Accounting Mode synchronizing quotas, Synchronizing Quotas with the quotasync Command repairing, Repairing a GFS2 File System suspending activity, Suspending Activity on a GFS2 File System unmounting, Unmounting a GFS2 File System fsck.gfs2 command, Repairing a GFS2 File System G GFS2 atime, configuring updates, Configuring atime Updates mounting with noatime , Mount with noatime mounting with relatime , Mount with relatime Configuration considerations, GFS2 Configuration and Operational Considerations managing, Managing GFS2 Operation, GFS2 Configuration and Operational Considerations quota management, GFS2 Quota Management , Setting Up Quotas in Enforcement or Accounting Mode synchronizing quotas, Synchronizing Quotas with the quotasync Command withdraw function, The GFS2 Withdraw Function GFS2 file system maximum size, GFS2 Support Limits GFS2-specific options for adding journals table, Complete Usage GFS2-specific options for expanding file systems table, Complete Usage gfs2_grow command, Growing a GFS2 File System gfs2_jadd command, Adding Journals to a GFS2 File System glock, GFS2 Tracepoints and the debugfs glocks File glock flags, Troubleshooting GFS2 Performance with the GFS2 Lock Dump , The glock debugfs Interface glock holder flags, Troubleshooting GFS2 Performance with the GFS2 Lock Dump , Glock Holders glock types, Troubleshooting GFS2 Performance with the GFS2 Lock Dump , The glock debugfs Interface growing a file system, Growing a GFS2 File System M making a file system, Creating a GFS2 File System managing GFS2, Managing GFS2 maximum size, GFS2 file system, GFS2 Support Limits mkfs command, Creating a GFS2 File System mkfs.gfs2 command options table, Complete Options mount command, Mounting a GFS2 File System mount table, Complete Usage mounting a file system, Mounting a GFS2 File System N node locking, GFS2 Node Locking O overview, GFS2 Overview configuration, before, Before Setting Up GFS2 features, new and changed, New and Changed Features P performance tuning, 
Performance Tuning with GFS2 Posix locking, Issues with Posix Locking Q quota management, GFS2 Quota Management , Setting Up Quotas in Enforcement or Accounting Mode synchronizing quotas, Synchronizing Quotas with the quotasync Command quotacheck , Creating the Quota Database Files quotacheck command checking quota accuracy with, Keeping Quotas Accurate quota_quantum tunable parameter, Synchronizing Quotas with the quotasync Command R repairing a file system, Repairing a GFS2 File System S suspending activity on a file system, Suspending Activity on a GFS2 File System system hang at unmount, Unmounting a GFS2 File System T tables GFS2-specific options for adding journals, Complete Usage GFS2-specific options for expanding file systems, Complete Usage mkfs.gfs2 command options, Complete Options mount options, Complete Usage tracepoints, GFS2 Tracepoints and the debugfs glocks File tuning, performance, Performance Tuning with GFS2 U umount command, Unmounting a GFS2 File System unmount, system hang, Unmounting a GFS2 File System unmounting a file system, Unmounting a GFS2 File System W withdraw function, GFS2, The GFS2 Withdraw Function | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/ix01 |
Chapter 5. Installing a three-node cluster on Azure | Chapter 5. Installing a three-node cluster on Azure In OpenShift Container Platform version 4.18, you can install a three-node cluster on Microsoft Azure. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an Azure Marketplace image is not supported. 5.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on Azure using ARM templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 5.2. Next steps Installing a cluster on Azure with customizations Installing a cluster on Azure using ARM templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure/installing-azure-three-node |
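After installation, the schedulability of the control plane can also be confirmed from the command line. This is a minimal verification sketch and assumes the oc client is logged in to the new cluster with cluster-admin rights:

# In a three-node cluster, each control plane node also carries the worker role
oc get nodes
# The cluster Scheduler resource should report mastersSchedulable: true
oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'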
Chapter 7. Updating an instance | Chapter 7. Updating an instance You can add and remove additional resources from running instances, such as persistent volume storage, a network interface, or a public IP address. You can also update instance metadata and the security groups that the instance belongs to. 7.1. Attaching a network to an instance You can attach a network to a running instance. When you attach a network to the instance, the Compute service creates the port on the network for the instance. Use a network to attach the network interface to an instance when you want to use the default security group and there is only one subnet on the network. Procedure Identify the available networks and note the name or ID of the network that you want to attach to your instance: If the network that you need is not available,create a new network: Attach the network to your instance: Replace <instance> with the name or ID of the instance that you want to attach the network to. Replace <network> with the name or ID of the network that you want to attach to the instance. Additional resources openstack network create command in the Command Line Interface Reference . Creating a network in the Networking Guide . 7.2. Detaching a network from an instance You can detach a network from an instance. Note Detaching the network detaches all network ports. If the instance has multiple ports on a network and you want to detach only one of those ports, follow the Detaching a port from an instance procedure to detach the port. Procedure Identify the network that is attached to the instance: Detach the network from the instance: Replace <instance> with the name or ID of the instance that you want to remove the network from. Replace <network> with the name or ID of the network that you want to remove from the instance. 7.3. Attaching a port to an instance You can attach a network interface to a running instance by using a port. You can attach a port to only one instance at a time. Use a port to attach the network interface to an instance when you want to use a custom security group, or when there are multiple subnets on the network. Tip If you attach the network interface by using a network, the port is created automatically. For more information, see Attaching a network to an instance . Note You cannot attach a port with an SR-IOV vNIC to an instance, or a port with a guaranteed minimum bandwidth QoS policy. Procedure Identify the available ports and note the name or ID of the port that you want to attach to your instance: If the port that you need is not available,create a new port: Replace <network> with the name or ID of the network to create the port on. Replace <port> with the name or ID of the port that you want to attach to the instance. Attach the port to your instance: Replace <instance> with the name or ID of the instance that you want to attach the port to. Replace <port> with the name or ID of the port that you want to attach to the instance. Additional resources openstack port create command in the Command Line Interface Reference . Configuring Quality of Service (QoS) policies in the Networking Guide . 7.4. Detaching a port from an instance You can detach a port from an instance. Procedure Identify the port that is attached to the instance: Detach the port from the instance: Replace <instance> with the name or ID of the instance that you want to remove the port from. Replace <port> with the name or ID of the port that you want to remove from the instance. 7.5. 
Attaching a volume to an instance You can attach a volume to an instance for persistent storage. You can attach a volume to only one instance at a time, unless the volume has been configured as a multiattach volume. For more information about creating multiattach-capable volumes, see Attach a volume to multiple instances . Prerequisites To attach a multiattach volume, the environment variable OS_COMPUTE_API_VERSION is set to 2.60 or later. To attach more than 26 volumes to your instance, the image you used to create the instance must have the following properties: hw_scsi_model=virtio-scsi hw_disk_bus=scsi Procedure Identify the available volumes and note the name or ID of the volume that you want to attach to your instance: Attach the volume to your instance: Replace <instance> with the name or ID of the instance that you want to attach the volume to. Replace <volume> with the name or ID of the volume that you want to attach to the instance. Note If the command returns the following error, the volume you chose to attach to the instance is multiattach, therefore you must use Compute API version 2.60 or later: You can either set the environment variable OS_COMPUTE_API_VERSION=2.72 , or include the --os-compute-api-version argument when adding the volume to the instance: Tip Specify --os-compute-api-version 2.20 or higher to add a volume to an instance with status SHELVED or SHELVED_OFFLOADED . Confirm that the volume is attached to the instance or instances: Replace <volume> with the name or ID of the volume to display. Example output: 7.6. Viewing the volumes attached to an instance You can view the volumes attached to a particular instance. Prerequisites You are using python-openstackclient 5.5.0 . Procedure List the volumes attached to an instance: 7.7. Detaching a volume from an instance You can detach a volume from an instance. Note Detaching the network detaches all network ports. If the instance has multiple ports on a network and you want to detach only one of those ports, follow the Detaching a port from an instance procedure to detach the port. Procedure Identify the volume that is attached to the instance: Detach the volume from the instance: Replace <instance> with the name or ID of the instance that you want to remove the volume from. Replace <volume> with the name or ID of the volume that you want to remove from the instance. Note Specify --os-compute-api-version 2.20 or higher to remove a volume from an instance with status SHELVED or SHELVED_OFFLOADED . | [
"(overcloud)USD openstack network list",
"(overcloud)USD openstack network create <network>",
"openstack server add network <instance> <network>",
"(overcloud)USD openstack server show <instance>",
"openstack server remove network <instance> <network>",
"(overcloud)USD openstack port list",
"(overcloud)USD openstack port create --network <network> <port>",
"openstack server add port <instance> <port>",
"(overcloud)USD openstack server show <instance>",
"openstack server remove port <instance> <port>",
"(overcloud)USD openstack volume list",
"openstack server add volume <instance> <volume>",
"Multiattach volumes are only supported starting with compute API version 2.60. (HTTP 400) (Request-ID: req-3a969c31-e360-4c79-a403-75cc6053c9e5)",
"openstack --os-compute-api-version 2.72 server add volume <instance> <volume>",
"openstack volume show <volume>",
"+-----------------------------------------------------+----------------------+---------+-----+-----------------------------------------------------------------------------------------------+ | ID | Name | Status | Size| Attached to +-----------------------------------------------------+---------------------+---------+------+---------------------------------------------------------------------------------------------+ | f3fb92f6-c77b-429f-871d-65b1e3afa750 | volMultiattach | in-use | 50 | Attached to instance1 on /dev/vdb Attached to instance2 on /dev/vdb | +-----------------------------------------------------+----------------------+---------+-----+-----------------------------------------------------------------------------------------------+",
"openstack server volume list <instance> +---------------------+----------+---------------------+-----------------------+ | ID | Device | Server ID | Volume ID | +---------------------+----------+---------------------+-----------------------+ | 1f9dcb02-9a20-4a4b- | /dev/vda | ab96b635-1e63-4487- | 1f9dcb02-9a20-4a4b-9f | | 9f25-c7846a1ce9e8 | | a85c-854197cd537b | 25-c7846a1ce9e8 | +---------------------+----------+---------------------+-----------------------+",
"(overcloud)USD openstack server show <instance>",
"openstack server remove volume <instance> <volume>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/assembly_updating-an-instance_osp |
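The add volume command returns as soon as the request is accepted, so automation normally polls the volume status before using the device inside the guest. A rough sketch, assuming an instance named myserver and a volume named myvolume:

openstack server add volume myserver myvolume
# Wait until the Block Storage service reports the attachment as complete
until [ "$(openstack volume show myvolume -f value -c status)" = "in-use" ]; do
    sleep 5
done
openstack server volume list myserver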
4.336. valgrind | 4.336. valgrind 4.336.1. RHBA-2011:1651 - valgrind bug fix and enhancement update Updated valgrind packages that fix several bugs and add an enhancement are available for Red Hat Enterprise Linux 6. Valgrind is a tool to help users find memory management problems in programs. Valgrind can detect a lot of problems that are otherwise very hard to find or diagnose. Bug Fixes BZ# 708522 When building the valgrind package with macros to prevent application of any downstream patches, the rebuild process failed. This bug has been fixed and valgrind can now be properly rebuilt in the described scenario. BZ# 713956 Previously, the JIT (Just in Time Compiler) in some versions of the JDK (Java Development Kit) generated useless, but valid, instruction prefixes, which valgrind could not emulate. Consequently, Java applications running under valgrind sometimes terminated unexpectedly. With this update, valgrind has been changed to emulate instructions even with these useless prefixes, the JVM process now exits properly, and valgrind displays memory leak summary information in the described scenario. BZ# 717218 In a Coverity Scan analysis, a redundant check was discovered in one of the backported patches applied to the valgrind package. An upstream patch has been applied to address this issue and the redundant check is no longer performed. Enhancement BZ# 694598 With this update, the valgrind package has been updated to provide support for 64-bit IBM POWER7 Series hardware. Users of valgrind are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/valgrind |
Chapter 12. Configuring the vSphere connection settings after an installation | Chapter 12. Configuring the vSphere connection settings after an installation After installing an OpenShift Container Platform cluster on vSphere with the platform integration feature enabled, you might need to update the vSphere connection settings manually, depending on the installation method. For installations using the Assisted Installer, you must update the connection settings. This is because the Assisted Installer adds default connection settings to the vSphere connection configuration wizard as placeholders during the installation. For installer-provisioned or user-provisioned infrastructure installations, you should have entered valid connection settings during the installation. You can use the vSphere connection configuration wizard at any time to validate or modify the connection settings, but this is not mandatory for completing the installation. 12.1. Configuring the vSphere connection settings Modify the following vSphere configuration settings as required: vCenter address vCenter cluster vCenter username vCenter password vCenter address vSphere data center vSphere datastore Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to https://console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed. Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the path and name of the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config ConfigMap resource in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . 12.2. Verifying the configuration The connection configuration process updates operator statuses and control plane nodes. 
It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Prerequisites You have saved the configuration settings in the vSphere connection configuration wizard. Procedure Check that the configuration process completed successfully: In the OpenShift Container Platform Administrator perspective, navigate to Home Overview . Under Status , click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaims object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem If you are unable to create a PersistentVolumeClaims object, you can troubleshoot by navigating to Storage PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console. For instructions on creating storage objects, see Dynamic provisioning . | [
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/installing-vsphere-post-installation-configuration |
Chapter 2. Partner onboarding | Chapter 2. Partner onboarding Use the Red Hat Customer Portal to create a new account and to join the hardware certification program. If you face any issues during the certification process, you can contact us for support in any of the following ways. Create a Partner Acceleration Desk (PAD) ticket under the Product Certification category. Email the Red Hat Certification Operation (cert-ops) team at [email protected] . Contact your dedicated Ecosystem Partner Management (EPM) if you have been assigned one. 2.1. Creating a Red Hat account Procedure Open the Red Hat Customer Portal and click Register on the top-right corner of the page. The Register for a Red Hat account page displays. Note Make sure you use your company email ID and not your personal email. The email ID you enter will be used for all your communication with Red Hat going forward. Note Red Hat recommends using a unique login ID that is separate from the email ID to avoid account-related issues in the future. The login ID, once created, cannot be changed. Enter your Login information and Personal information . Select Corporate as the Account type. Enter your company's Contact information . Click Create My Account . A new Red Hat account is created. Request to enable Red Hat Partner Subscription (RHPS) on the same account. You must have an administrator organization (Org. Admins) rights to make a request. See Red Hat Partner Subscription (RHPS) for instructions. Verification Verify that an account number is assigned to your account. To do this, log in to your account at the Red Hat Customer Portal , and then click your avatar at the top-right to confirm the account details. 2.2. Join the hardware certification program Procedure Log in to the Red Hat Partner Connect portal to join the hardware certification program. Click Accept Terms and Conditions . Provide some more information about your company and product, and click Submit . Verify your email address by clicking on the link you received in your mailbox. On the Red Hat Terms and Conditions page, select all the I have read and agree to the terms check boxes, and click Submit . A message is displayed on successfully becoming a hardware partner. Select an option from the How will you use this subscription? drop-down list, and click Request partner subscription . On the Red Hat Terms and Conditions page, select all the I have read and agree to the terms check boxes, and click Submit . A message is displayed after successfully receiving a free partner subscription. A vendor profile is created and Single Sign-on (SSO) is automatically added to the vendor's users. Note Your vendor profile, once created, remains in the Red Hat database. However, your Red Hat Partner Subscription (RHPS) account becomes inactive after one year of inactivity. Therefore, if you are a returning partner, raise a request to reactivate your RHPS account before beginning hardware certification. Upon activating your RHPS account, the cert-ops team will be notified of your details, including Single Sign-on (SSO) information assigned to your account. Steps Opening a new certification case using the Red Hat Certification Tool | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly_onboarding-certification-partners_hw-test-suite-introduction |
Chapter 2. Preparing to deploy multiple OpenShift Data Foundation storage clusters | Chapter 2. Preparing to deploy multiple OpenShift Data Foundation storage clusters Before you begin the deployment of OpenShift Data Foundation using dynamic, local, or external storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Things you should remember before installing multiple OpenShift Data Foundation storage clusters: openshift-storage and openshift-storage-extended are the exclusively supported namespaces. Internal storage cluster is restricted to the OpenShift Data Foundation operator namespace. External storage cluster is permissible in both operator and non-operator namespaces. Multiple storage clusters are not supported in the same namespace. Hence, the external storage system will not be visible under the OpenShift Data Foundation operator page as the operator is under openshift-storage namespace and the external storage system is not. Customers running external storage clusters in the operator namespace cannot utilize multiple storage clusters. Multicloud Object Gateway is supported solely within the operator namespace. It is ignored in other namespaces. RADOS Gateway (RGW) can be in either the operator namespace, a non-operator namespace, or both Network File System (NFS) is enabled as long as it is enabled for at least one of the clusters. Topology is enabled as long as it is enabled for at least one of the clusters. Topology domain labels are set as long as the internal cluster is present. The Topology view of the cluster is only supported for OpenShift Data Foundation internal mode deployments. Different multus settings are not supported for multiple storage clusters. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_multiple_openshift_data_foundation_storage_clusters/preparing-to-deploy-multiple-odf-storage-clusters_rhodf |
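Before planning a second storage cluster, it can help to confirm what already exists and where. A minimal check, assuming the oc client and an existing OpenShift Data Foundation installation:

# List StorageCluster resources in every namespace
oc get storagecluster --all-namespaces
# An external storage system, if present, appears as a StorageSystem in its namespace
oc get storagesystem --all-namespaces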
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/4.15_release_notes/making-open-source-more-inclusive |
Chapter 1. Overview | Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 3, Block pools provides you with information on how to create, update and delete block pools. Chapter 4, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 6, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 8, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 9, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 10, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 11, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 12, Volume cloning shows you how to create volume clones. Chapter 13, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/overview |
5.5. Creating a Mirrored LVM Logical Volume in a Cluster | 5.5. Creating a Mirrored LVM Logical Volume in a Cluster Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume on a single node with a segment type of mirror . However, in order to create a mirrored LVM volume in a cluster: The cluster and cluster mirror infrastructure must be running The cluster must be quorate The locking type in the lvm.conf file must be set correctly to enable cluster locking and the use_lvmetad setting should be 0. Note, however, that in Red Hat Enterprise Linux 7 the ocf:heartbeat:clvm Pacemaker resource agent itself, as part of the start procedure, performs these tasks. In Red Hat Enterprise Linux 7, clusters are managed through Pacemaker. Clustered LVM logical volumes are supported only in conjunction with Pacemaker clusters, and must be configured as cluster resources. The following procedure creates a mirrored LVM volume in a cluster. Install the cluster software and LVM packages, start the cluster software, and create the cluster. You must configure fencing for the cluster. The document High Availability Add-On Administration provides a sample procedure for creating a cluster and configuring fencing for the nodes in the cluster. The document High Availability Add-On Reference provides more detailed information about the components of cluster configuration. In order to create a mirrored logical volume that is shared by all of the nodes in a cluster, the locking type must be set correctly in the lvm.conf file in every node of the cluster. By default, the locking type is set to local. To change this, execute the following command in each node of the cluster to enable clustered locking: Set up a dlm resource for the cluster. You create the resource as a cloned resource so that it will run on every node in the cluster. Configure clvmd as a cluster resource. Just as for the dlm resource, you create the resource as a cloned resource so that it will run on every node in the cluster. Note that you must set the with_cmirrord=true parameter to enable the cmirrord daemon on all of the nodes that clvmd runs on. If you have already configured a clvmd resource but did not specify the with_cmirrord=true parameter, you can update the resource to include the parameter with the following command. Set up clvmd and dlm dependency and start up order. clvmd must start after dlm and must run on the same node as dlm . Create the mirror. The first step is creating the physical volumes. The following commands create three physical volumes. Two of the physical volumes will be used for the legs of the mirror, and the third physical volume will contain the mirror log. Create the volume group. This example creates a volume group vg001 that consists of the three physical volumes that were created in the step. Note that the output of the vgcreate command indicates that the volume group is clustered. You can verify that a volume group is clustered with the vgs command, which will show the volume group's attributes. If a volume group is clustered, it will show a c attribute. Create the mirrored logical volume. This example creates the logical volume mirrorlv from the volume group vg001 . This volume has one mirror leg. This example specifies which extents of the physical volume will be used for the logical volume. You can use the lvs command to display the progress of the mirror creation. 
The following example shows that the mirror is 47% synced, then 91% synced, then 100% synced when the mirror is complete. The completion of the mirror is noted in the system log: You can use the lvs command with the -o +devices options to display the configuration of the mirror, including which devices make up the mirror legs. You can see that the logical volume in this example is composed of two linear images and one log. You can use the seg_pe_ranges option of the lvs to display the data layout. You can use this option to verify that your layout is properly redundant. The output of this command displays PE ranges in the same format that the lvcreate and lvresize commands take as input. Note For information on recovering from the failure of one of the legs of an LVM mirrored volume, see Section 6.2, "Recovering from LVM Mirror Failure" . | [
"/sbin/lvmconf --enable-cluster",
"pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true",
"pcs resource create clvmd ocf:heartbeat:clvm with_cmirrord=true op monitor interval=30s on-fail=fence clone interleave=true ordered=true",
"pcs resource update clvmd with_cmirrord=true",
"pcs constraint order start dlm-clone then clvmd-clone pcs constraint colocation add clvmd-clone with dlm-clone",
"pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created pvcreate /dev/sdc1 Physical volume \"/dev/sdc1\" successfully created pvcreate /dev/sdd1 Physical volume \"/dev/sdd1\" successfully created",
"vgcreate vg001 /dev/sdb1 /dev/sdc1 /dev/sdd1 Clustered volume group \"vg001\" successfully created",
"vgs vg001 VG #PV #LV #SN Attr VSize VFree vg001 3 0 0 wz--nc 68.97G 68.97G",
"lvcreate --type mirror -l 1000 -m 1 vg001 -n mirrorlv /dev/sdb1:1-1000 /dev/sdc1:1-1000 /dev/sdd1:0 Logical volume \"mirrorlv\" created",
"lvs vg001/mirrorlv LV VG Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv vg001 mwi-a- 3.91G vg001_mlog 47.00 lvs vg001/mirrorlv LV VG Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv vg001 mwi-a- 3.91G vg001_mlog 91.00 lvs vg001/mirrorlv LV VG Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv vg001 mwi-a- 3.91G vg001_mlog 100.00",
"May 10 14:52:52 doc-07 [19402]: Monitoring mirror device vg001-mirrorlv for events May 10 14:55:00 doc-07 lvm[19402]: vg001-mirrorlv is now in-sync",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Convert Devices mirrorlv vg001 mwi-a- 3.91G mirrorlv_mlog 100.00 mirrorlv_mimage_0(0),mirrorlv_mimage_1(0) [mirrorlv_mimage_0] vg001 iwi-ao 3.91G /dev/sdb1(1) [mirrorlv_mimage_1] vg001 iwi-ao 3.91G /dev/sdc1(1) [mirrorlv_mlog] vg001 lwi-ao 4.00M /dev/sdd1(0)",
"lvs -a -o +seg_pe_ranges --segments PE Ranges mirrorlv_mimage_0:0-999 mirrorlv_mimage_1:0-999 /dev/sdb1:1-1000 /dev/sdc1:1-1000 /dev/sdd1:0-0"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/mirvol_create_ex |
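Before creating the mirror, a short pre-flight check on each node can confirm the conditions listed at the start of the procedure. The commands below are a sketch and assume the cluster packages from the procedure are already installed:

# The cluster must be running, quorate, and the dlm/clvmd clones started
pcs status --full | grep -E 'dlm|clvmd'
corosync-quorumtool -s
# Clustered locking must be enabled in lvm.conf on every node
grep '^[[:space:]]*locking_type' /etc/lvm/lvm.conf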
23.4. Configuration Examples | 23.4. Configuration Examples 23.4.1. SpamAssassin and Postfix SpamAssasin is an open-source mail filter that provides a way to filter unsolicited email (spam messages) from incoming email. [23] When using Red Hat Enterprise Linux, the spamassassin package provides SpamAssassin. Enter the following command to see if the spamassassin package is installed: If it is not installed, use the yum utility as root to install it: SpamAssassin operates in tandem with a mailer such as Postfix to provide spam-filtering capabilities. In order for SpamAssassin to effectively intercept, analyze and filter mail, it must listen on a network interface. The default port for SpamAssassin is TCP/783, however this can be changed. The following example provides a real-world demonstration of how SELinux complements SpamAssassin by only allowing it access to a certain port by default. This example will then demonstrate how to change the port and have SpamAssassin operate on a non-default port. Note that this is an example only and demonstrates how SELinux can affect a simple configuration of SpamAssassin. Comprehensive documentation of SpamAssassin is beyond the scope of this document. See the official SpamAssassin documentation for further details. This example assumes the spamassassin is installed, that any firewall has been configured to allow access on the ports in use, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode: Procedure 23.1. Running SpamAssassin on a non-default port Use the semanage utility as root to show the port that SELinux allows the spamd daemon to listen on by default: This output shows that TCP/783 is defined in spamd_port_t as the port for SpamAssassin to operate on. Edit the /etc/sysconfig/spamassassin configuration file and modify it so that it will start SpamAssassin on the example port TCP/10000: This line now specifies that SpamAssassin will operate on port 10000. The rest of this example will show how to modify the SELinux policy to allow this socket to be opened. Start SpamAssassin and an error message similar to the following will appear: This output means that SELinux has blocked access to this port. A denial message similar to the following will be logged by SELinux: As root, run semanage to modify the SELinux policy in order to allow SpamAssassin to operate on the example port (TCP/10000): Confirm that SpamAssassin will now start and is operating on TCP port 10000: At this point, spamd is properly operating on TCP port 10000 as it has been allowed access to that port by the SELinux policy. [23] For more information, see the Spam Filters section in the System Administrator's Guide . | [
"~]USD rpm -q spamassassin package spamassassin is not installed",
"~]# yum install spamassassin",
"~]# semanage port -l | grep spamd spamd_port_t tcp 783",
"Options to spamd SPAMDOPTIONS=\"-d -p 10000 -c m5 -H\"",
"~]# systemctl start spamassassin.service Job for spamassassin.service failed. See 'systemctl status spamassassin.service' and 'journalctl -xn' for details.",
"SELinux is preventing the spamd (spamd_t) from binding to port 10000.",
"~]# semanage port -a -t spamd_port_t -p tcp 10000",
"~]# systemctl start spamassassin.service ~]# netstat -lnp | grep 10000 tcp 0 0 127.0.0.1:10000 0.0.0.0:* LISTEN 2224/spamd.pid"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-postfix-configuration_examples |
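If spamd still fails to start after the policy change, the port label and any remaining denials can be rechecked with commands similar to the following; the port number is the example value used above:

# Confirm that TCP port 10000 now carries the spamd_port_t label
semanage port -l | grep spamd
# Review recent AVC denials involving spamd
ausearch -m avc -ts recent | grep spamd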
Chapter 2. Managing user accounts using the command line | Chapter 2. Managing user accounts using the command line There are several stages in the user life cycle in IdM (Identity Management), including the following: Create user accounts Activate stage user accounts Preserve user accounts Delete active, stage, or preserved user accounts Restore preserved user accounts 2.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you granted admin permissions to at least one different user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 2.2. Adding users using the command line You can add users as: Active - user accounts which can be actively used by their users. Stage - users cannot use these accounts. Create stage users if you want to prepare new user accounts. When users are ready to use their accounts, then you can activate them. The following procedure describes adding active users to the IdM server with the ipa user-add command. Similarly, you can create stage user accounts with the ipa stageuser-add command. Warning IdM automatically assigns a unique user ID (UID) to new user accounts. You can assign a UID manually by using the --uid=INT option with the ipa user-add command, but the server does not validate whether the UID number is unique. Consequently, multiple user entries might have the same UID number. A similar problem can occur with user private group IDs (GIDs) if you assign a GID to a user account manually by using the --gidnumber=INT option. To check if you have multiple user entries with the same ID, enter ipa user-find --uid=<uid> or ipa user-find --gidnumber=<gidnumber> . Red Hat recommends you do not have multiple entries with the same UIDs or GIDs. If you have objects with duplicate IDs, security identifiers (SIDs) are not generated correctly. SIDs are crucial for trusts between IdM and Active Directory and for Kerberos authentication to work correctly. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . 
Procedure Open terminal and connect to the IdM server. Add user login, user's first name, last name and optionally, you can also add their email address. IdM supports user names that can be described by the following regular expression: Note User names ending with the trailing dollar sign (USD) are supported to enable Samba 3.x machine support. If you add a user name containing uppercase characters, IdM automatically converts the name to lowercase when saving it. Therefore, IdM always requires to enter user names in lowercase when logging in. Additionally, it is not possible to add user names which differ only in letter casing, such as user and User . The default maximum length for user names is 32 characters. To change it, use the ipa config-mod --maxusername command. For example, to increase the maximum user name length to 64 characters: The ipa user-add command includes a lot of parameters. To list them all, use the ipa help command: For details about ipa help command, see What is the IPA help . You can verify if the new user account is successfully created by listing all IdM user accounts: This command lists all user accounts with details. Additional resources Strengthening Kerberos security with PAC information Are user/group collisions supported in Red Hat Enterprise Linux? (Red Hat Knowledgebase) Users without SIDs cannot log in to IdM after an upgrade 2.3. Activating users using the command line To activate a user account by moving it from stage to active, use the ipa stageuser-activate command. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. Activate the user account with the following command: You can verify if the new user account is successfully created by listing all IdM user accounts: This command lists all user accounts with details. 2.4. Preserving users using the command line You can preserve a user account if you want to remove it, but keep the option to restore it later. To preserve a user account, use the --preserve option with the ipa user-del or ipa stageuser-del commands. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. Preserve the user account with the following command: Note Despite the output saying the user account was deleted, it has been preserved. 2.5. Deleting users using the command line IdM (Identity Management) enables you to delete users permanently. You can delete: Active users with the following command: ipa user-del Stage users with the following command: ipa stageuser-del Preserved users with the following command: ipa user-del When deleting multiple users, use the --continue option to force the command to continue regardless of errors. A summary of the successful and failed operations is printed to the stdout standard output stream when the command completes. If you do not use --continue , the command proceeds with deleting users until it encounters an error, after which it stops and exits. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. 
Delete the user account with the following command: The user account has been permanently deleted from IdM. 2.6. Restoring users using the command line You can restore a preserved users to: Active users: ipa user-undel Stage users: ipa user-stage Restoring a user account does not restore all of the account's attributes. For example, the user's password is not restored and must be set again. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. Activate the user account with the following command: Alternatively, you can restore user accounts as staged: Verification You can verify if the new user account is successfully created by listing all IdM user accounts: This command lists all user accounts with details. | [
"ipa user-add user_login --first=first_name --last=last_name --email=email_address",
"[a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,252}[a-zA-Z0-9_.USD-]?",
"ipa config-mod --maxusername=64 Maximum username length: 64",
"ipa help user-add",
"ipa user-find",
"ipa stageuser-activate user_login ------------------------- Stage user user_login activated -------------------------",
"ipa user-find",
"ipa user-del --preserve user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-del --continue user1 user2 user3",
"ipa user-del user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-undel user_login ------------------------------ Undeleted user account \"user_login\" ------------------------------",
"ipa user-stage user_login ------------------------------ Staged user account \"user_login\" ------------------------------",
"ipa user-find"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-accounts-using-the-command-line_managing-users-groups-hosts |
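The ipa commands above operate on one account at a time; for larger batches they are often wrapped in a small loop. A rough sketch, assuming a CSV file of login,first,last values and a valid Kerberos ticket:

# Create the accounts in the stage area first
while IFS=, read -r login first last; do
    ipa stageuser-add "$login" --first="$first" --last="$last"
done < new_users.csv
# Activate them once they are ready for use
while IFS=, read -r login first last; do
    ipa stageuser-activate "$login"
done < new_users.csv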
Chapter 3. Running an Ansible playbook from Satellite | Chapter 3. Running an Ansible playbook from Satellite You can run an Ansible playbook on a host or host group by executing a remote job in Satellite. Limitation of host parameters in Ansible playbook job templates When you execute an Ansible playbook on multiple hosts, Satellite renders the playbook for all hosts in the batch, but only uses the rendered playbook of the first host to execute it on all hosts in the batch. Therefore, you cannot modify the behavior of the playbook per host by using a host parameter in the template control flow constructs. Host parameters are translated into Ansible variables, so you can use them to control the behavior in native Ansible constructs. For more information, see BZ#2282275 . Prerequisites Ansible plugin in Satellite is enabled. Remote job execution is configured. For more information, see Chapter 4, Configuring and setting up remote jobs . You have an Ansible playbook ready to use. Procedure In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . In Job category , select Ansible Playbook . In Job template , select Ansible - Run playbook . Click . Select the hosts on which you want to run the playbook. In the playbook field, paste the content of your Ansible playbook. Follow the wizard to complete setting the remote job. For more information, see Section 4.21, "Executing a remote job" . Click Submit to run the Ansible playbook on your hosts. Additional resources Alternatively, you can import Ansible playbooks from Capsule Servers. For more information, see the following resources: Section 4.7, "Importing an Ansible playbook by name" Section 4.8, "Importing all available Ansible playbooks" | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_ansible_integration/running-an-ansible-playbook-from-satellite_ansible |
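Because host parameters arrive in the playbook as ordinary Ansible variables, per-host behavior is normally expressed inside the playbook rather than in the job template. A minimal illustration; the parameter name install_docs is hypothetical:

- hosts: all
  tasks:
    - name: Install documentation packages only where the host parameter requests it
      ansible.builtin.package:
        name: man-pages
        state: present
      when: install_docs | default('false') | bool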
Chapter 1. Release Schedule | Chapter 1. Release Schedule The following table lists the dates of each Red Hat Virtualization 4.3 release: Table 1.1. Red Hat Virtualization release schedule Release Date Red Hat Virtualization 4.3 General Availability (ovirt-4.3.3) 2019-05-08 Red Hat Virtualization 4.3 Batch Update 1 (ovirt-4.3.4) 2019-06-20 Red Hat Virtualization 4.3 Batch Update 2 (ovirt-4.3.5) 2019-08-12 Red Hat Virtualization 4.3 Batch Update 3 (ovirt-4.3.6) 2019-10-10 Red Hat Virtualization 4.3 Batch Update 4 (ovirt-4.3.7) 2019-12-12 Red Hat Virtualization 4.3 Batch Update 5 (ovirt-4.3.8) 2020-02-13 Red Hat Virtualization 4.3 Batch Update 6 (ovirt-4.3.9) 2020-04-02 Red Hat Virtualization 4.3 Batch Update 7 (ovirt-4.3.10) 2020-06-04 Red Hat Virtualization 4.3 Batch Update 8 (ovirt-4.3.11) 2020-09-30 Red Hat Virtualization 4.3 Batch Update 9 (ovirt-4.3.12) 2020-11-17 | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/package_manifest/release_schedule |
Chapter 7. File Systems | Chapter 7. File Systems gfs2-utils rebase to version 3.1.8 The gfs2-utils package has been rebased to version 3.1.8, which provides important fixes and a number of enhancements: * The performance of the fsck.gfs2 , mkfs.gfs2 , and gfs2_edit utilities has been improved. * The fsck.gfs2 utility now performs better checking of journals, the jindex, system inodes, and the inode 'goal' values. * The gfs2_jadd and gfs2_grow utilities are now separate programs instead of symlinks to mkfs.gfs2 . * The test suite and related documentation have been improved. * The package no longer depends on Perl. GFS2 now prevents users from exceeding their quotas Previously, GFS2 only checked quota violations after the completion of operations, which could result in users or groups exceeding their allotted quotas. This behavior has been fixed, and GFS2 now predicts how many blocks an operation would allocate and checks if allocating them would violate quotas. Operations that would result in quota violations are disallowed, and users thus never exceed their allotted quotas. XFS rebase to version 4.1 XFS has been upgraded to upstream version 4.1 including minor bug fixes, refactorings, reworks of certain internal mechanisms, such as logging, pcpu accounting, and new mmap locking. On top of the upstream changes, this update extends the rename() function to add cross-rename (a symmetric variant of rename()) and whiteout handling. cifs rebase to version 3.17 The CIFS module has been upgraded to upstream version 3.17, which provides various minor fixes and new features for Server Message Block (SMB) 2 and 3: SMB version 2.0, 2.1, 3.0, and 3.0.2. Note that using the Linux kernel CIFS module with SMB protocol 3.1.1 is currently experimental and the functionality is unavailable in kernels provided by Red Hat. In addition, features introduced in SMB version 3.0.2 are defined as optional and are not currently supported by Red Hat Enterprise Linux. Changes in NFS in Red Hat Enterprise Linux 7.2 Fallocate support allows preallocation of files on the server. The SEEK_HOLE and SEEK_DATA extensions to the fseek() function make it possible to locate holes or data quickly and efficiently. Red Hat Enterprise Linux 7.2 also adds support for flexible file layout on NFSv4 clients described in the Technology Previews section. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/file_systems |
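As a quick illustration of the fallocate support mentioned for NFS (not part of the release note itself), preallocation can be exercised from a client; the mount point is a placeholder and the server must support the operation:

# Preallocate 512 MiB for a file on an NFS mount
fallocate -l 512M /mnt/nfs/prealloc.img
# Compare allocated size with apparent size
du -h /mnt/nfs/prealloc.img
du -h --apparent-size /mnt/nfs/prealloc.img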
Chapter 5. Managing alerts | Chapter 5. Managing alerts In OpenShift Container Platform 4.7, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an OpenShift Container Platform cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in with cluster-administrator privileges, all alerts, silences, and alerting rules are accessible. 5.1. Accessing the Alerting UI in the Administrator and Developer perspectives The Alerting UI is accessible through the Administrator perspective and the Developer perspective in the OpenShift Container Platform web console. In the Administrator perspective, select Monitoring Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting Rules pages. In the Developer perspective, select Monitoring <project_name> Alerts . In this perspective, alerts, silences, and alerting rules are all managed from the Alerts page. The results shown in the Alerts page are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you do not have cluster-admin privileges. 5.2. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: Alert State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert will continue to fire as long as the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications will not be sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . The condition that triggered the alert could have a critical impact. 
The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled post-installation to provide observability into your own workloads. Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OpenShift Container Platform and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: Silence State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. Understanding alerting rule filters In the Administrator perspective, the Alerting Rules page in the Alerting UI provides details about alerting rules relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert will continue to fire as long as the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications will not be sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . 
Platform-level alerting rules relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled post-installation to provide observability into your own workloads. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 5.3. Getting information about alerts, silences, and alerting rules The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. Procedure To obtain information about alerts in the Administrator perspective : Open the OpenShift Container Platform web console and navigate to the Monitoring Alerting Alerts page. Optional: Search for alerts by name using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Select the name of an alert to navigate to its Alert Details page. The page includes a graph that illustrates alert time series data. It also provides information about the alert, including: A description of the alert Messages associated with the alerts Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences in the Administrator perspective : Navigate to the Monitoring Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing Alerts , and State column headers. Select the name of a silence to navigate to its Silence Details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules in the Administrator perspective : Navigate to the Monitoring Alerting Alerting Rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert State , and Source column headers. Select the name of an alerting rule to navigate to its Alerting Rule Details page. 
The page provides the following details about the alerting rule: Alerting rule name, severity, and description The expression that defines the condition for firing the alert The time for which the condition should be true for an alert to fire A graph for each alert governed by the alerting rule, showing the value with which the alert is firing A table of all alerts governed by the alerting rule To obtain information about alerts, silences, and alerting rules in the Developer perspective : Navigate to the Monitoring <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert Details can be viewed by selecting > to the left of an alert name and then selecting the alert in the list. Silence Details can be viewed by selecting a silence in the Silenced By section of the Alert Details page. The Silence Details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting Rule Details can be viewed by selecting View Alerting Rule in the menu on the right of an alert in the Alerts page. Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. 5.4. Managing alerting rules OpenShift Container Platform monitoring ships with a set of default alerting rules. As a cluster administrator, you can view the default alerting rules. In OpenShift Container Platform 4.7, you can create, view, edit, and remove alerting rules in user-defined projects. Alerting rule considerations The default alerting rules are used specifically for the OpenShift Container Platform cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 5.4.1. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. Optimize alert routing . Deploy an alerting rule directly on the Prometheus instance in the openshift-user-workload-monitoring project if the rule does not query default OpenShift Container Platform metrics. This reduces latency for alerting rules and minimizes the load on monitoring components. 
Warning Default OpenShift Container Platform metrics for user-defined projects provide information about CPU and memory usage, bandwidth statistics, and packet rate information. Those metrics cannot be included in an alerting rule if you route the rule directly to the Prometheus instance in the openshift-user-workload-monitoring project. Alerting rule optimization should be used only if you have read the documentation and have a comprehensive understanding of the monitoring architecture. Additional resources See the Prometheus alerting documentation for further guidelines on optimizing alerts See Monitoring overview for details about OpenShift Container Platform 4.7 monitoring architecture 5.4.2. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will fire alerts based on the values of chosen metrics. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. For example: Note When you create an alerting rule, a project label is enforced on it if a rule with the same name exists in another project. apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert expr: version{job="prometheus-example-app"} == 0 This configuration creates an alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 . Important A user-defined alerting rule can include metrics for its own project and cluster metrics. You cannot include metrics for another user-defined project. For example, an alerting rule for the user-defined project ns1 can have metrics from ns1 and cluster metrics, such as the CPU and memory metrics. However, the rule cannot include metrics from ns2 . Additionally, you cannot create alerting rules for the openshift-* core OpenShift projects. OpenShift Container Platform monitoring by default provides a set of alerting rules for these projects. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml It takes some time to create the alerting rule. 5.4.3. Reducing latency for alerting rules that do not query platform metrics If an alerting rule for a user-defined project does not query default cluster metrics, you can deploy the rule directly on the Prometheus instance in the openshift-user-workload-monitoring project. This reduces latency for alerting rules by bypassing Thanos Ruler when it is not required. This also helps to minimize the overall load on monitoring components. Warning Default OpenShift Container Platform metrics for user-defined projects provide information about CPU and memory usage, bandwidth statistics, and packet rate information. Those metrics cannot be included in an alerting rule if you deploy the rule directly to the Prometheus instance in the openshift-user-workload-monitoring project. The procedure outlined in this section should only be used if you have read the documentation and have a comprehensive understanding of the monitoring architecture. Prerequisites You have enabled monitoring for user-defined projects. 
You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file that includes a label with the key openshift.io/prometheus-rule-evaluation-scope and value leaf-prometheus . For example: apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 labels: openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus spec: groups: - name: example rules: - alert: VersionAlert expr: version{job="prometheus-example-app"} == 0 If that label is present, the alerting rule is deployed on the Prometheus instance in the openshift-user-workload-monitoring project. If the label is not present, the alerting rule is deployed to Thanos Ruler. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml It takes some time to create the alerting rule. See Monitoring overview for details about OpenShift Container Platform 4.7 monitoring architecture. 5.4.4. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view role for your project. You have installed the OpenShift CLI ( oc ). Procedure You can list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 5.4.5. Listing alerting rules for all projects in a single view As a cluster administrator, you can list alerting rules for core OpenShift Container Platform and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Monitoring Alerting Alerting Rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 5.4.6. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> Additional resources See the Alertmanager documentation 5.5. Managing silences You can create a silence to stop receiving notifications about an alert when it is firing. It might be useful to silence an alert after being first notified, while you resolve the underlying issue. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. You can view, edit, and expire existing silences. 5.5.1. Silencing alerts You can either silence a specific alert or silence alerts that match a specification that you define. 
Prerequisites You have access to the cluster as a developer or as a user with edit permissions for the project that you are viewing metrics for. Procedure To silence a specific alert: In the Administrator perspective: Navigate to the Monitoring Alerting Alerts page of the OpenShift Container Platform web console. For the alert that you want to silence, select the in the right-hand column and select Silence Alert . The Silence Alert form will appear with a pre-populated specification for the chosen alert. Optional: Modify the silence. You must add a comment before creating the silence. To create the silence, select Silence . In the Developer perspective: Navigate to the Monitoring <project_name> Alerts page in the OpenShift Container Platform web console. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select Silence Alert . The Silence Alert form will appear with a prepopulated specification for the chosen alert. Optional: Modify the silence. You must add a comment before creating the silence. To create the silence, select Silence . To silence a set of alerts by creating an alert specification in the Administrator perspective: Navigate to the Monitoring Alerting Silences page in the OpenShift Container Platform web console. Select Create Silence . Set the schedule, duration, and label details for an alert in the Create Silence form. You must also add a comment for the silence. To create silences for alerts that match the label sectors that you entered in the step, select Silence . 5.5.2. Editing silences You can edit a silence, which will expire the existing silence and create a new one with the changed configuration. Procedure To edit a silence in the Administrator perspective: Navigate to the Monitoring Alerting Silences page. For the silence you want to modify, select the in the last column and choose Edit silence . Alternatively, you can select Actions Edit Silence in the Silence Details page for a silence. In the Edit Silence page, enter your changes and select Silence . This will expire the existing silence and create one with the chosen configuration. To edit a silence in the Developer perspective: Navigate to the Monitoring <project_name> Alerts page. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select the name of a silence in the Silenced By section in that page to navigate to the Silence Details page for the silence. Select the name of a silence to navigate to its Silence Details page. Select Actions Edit Silence in the Silence Details page for a silence. In the Edit Silence page, enter your changes and select Silence . This will expire the existing silence and create one with the chosen configuration. 5.5.3. Expiring silences You can expire a silence. Expiring a silence deactivates it forever. Procedure To expire a silence in the Administrator perspective: Navigate to the Monitoring Alerting Silences page. For the silence you want to modify, select the in the last column and choose Expire silence . Alternatively, you can select Actions Expire Silence in the Silence Details page for a silence. To expire a silence in the Developer perspective: Navigate to the Monitoring <project_name> Alerts page. Expand the details for an alert by selecting > to the left of the alert name. 
Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select the name of a silence in the Silenced By section in that page to navigate to the Silence Details page for the silence. Select the name of a silence to navigate to its Silence Details page. Select Actions Expire Silence in the Silence Details page for a silence. 5.6. Sending notifications to external systems In OpenShift Container Platform 4.7, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 5.6.1. Configuring alert receivers You can configure alert receivers to ensure that you learn about important issues with your cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster Settings Global Configuration Alertmanager . Note Alternatively, you can navigate to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Select Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver Name and choose a Receiver Type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Select Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Select Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Choose whether TLS is required. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. 
You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors will be sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver: Add routing label names and values in the Routing Labels section of the form. Select Regular Expression if want to use a regular expression. Select Add Label to add further routing labels. Select Create to create the receiver. 5.7. Applying a custom Alertmanager configuration You can overwrite the default Alertmanager configuration by editing the alertmanager-main secret inside the openshift-monitoring project. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To change the Alertmanager configuration from the CLI: Print the currently active Alertmanager configuration into file alertmanager.yaml : USD oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - match: alertname: Watchdog repeat_interval: 5m receiver: watchdog - match: service: <your_service> 1 routes: - match: <your_matching_rules> 2 receiver: <receiver> 3 receivers: - name: default - name: watchdog - name: <receiver> # <receiver_configuration> 1 service specifies the service that fires the alerts. 2 <your_matching_rules> specifies the target alerts. 3 receiver specifies the receiver to use for the alert. The following Alertmanager configuration example configures PagerDuty as an alert receiver: global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - match: alertname: Watchdog repeat_interval: 5m receiver: watchdog - match: service: example-app routes: - match: severity: critical receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: " your-key " With this configuration, alerts of critical severity that are fired by the example-app service are sent using the team-frontend-page receiver. Typically these types of alerts would be paged to an individual or a critical response team. Apply the new configuration in the file: USD oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=- To change the Alertmanager configuration from the OpenShift Container Platform web console: Navigate to the Administration Cluster Settings Global Configuration Alertmanager YAML page of the web console. Modify the YAML configuration file. Select Save . Additional resources See the PagerDuty official site for more information on PagerDuty See the PagerDuty Prometheus Integration Guide to learn how to retrieve the service_key See Alertmanager configuration for configuring alerting through different alert receivers 5.8. steps Reviewing monitoring dashboards | [
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0",
"oc apply -f example-app-alerting-rule.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 labels: openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0",
"oc apply -f example-app-alerting-rule.yaml",
"oc -n <project> get prometheusrule",
"oc -n <project> get prometheusrule <rule> -o yaml",
"oc -n <namespace> delete prometheusrule <foo>",
"oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - match: alertname: Watchdog repeat_interval: 5m receiver: watchdog - match: service: <your_service> 1 routes: - match: <your_matching_rules> 2 receiver: <receiver> 3 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration>",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - match: alertname: Watchdog repeat_interval: 5m receiver: watchdog - match: service: example-app routes: - match: severity: critical receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \" your-key \"",
"oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/managing-alerts |
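Before replacing the alertmanager-main secret as described in the Alertmanager configuration steps above, it can help to validate the edited file locally. The sketch below assumes the upstream amtool utility is available on the workstation; it is not part of the oc client or the documented procedure, so treat it as an optional, unofficial check, and the severity and service labels are only examples.

# Extract the live configuration, as in the documented procedure.
oc -n openshift-monitoring get secret alertmanager-main \
  --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml

# Catch syntax and schema errors before the secret is replaced.
amtool check-config alertmanager.yaml

# Preview which receiver a hypothetical firing alert would be routed to.
amtool config routes test --config.file=alertmanager.yaml severity=critical service=example-app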
Chapter 14. Using bound service account tokens | Chapter 14. Using bound service account tokens You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM. 14.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 14.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Optional: Set the service account issuer. This step is typically not required if the bound tokens are used only within the cluster. Important If you change the service account issuer to a custom one, the service account issuer is still trusted for the 24 hours. You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes. Edit the cluster Authentication object: USD oc edit authentications cluster Set the spec.serviceAccountIssuer field to the desired service account issuer value: spec: serviceAccountIssuer: https://test.default.svc 1 1 This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc . Save the file to apply the changes. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster. Perform a rolling node restart: Warning It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster. Restart nodes sequentially. Wait for the node to become fully available before restarting the node. See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again. 
Manually restart all pods in the cluster: Warning Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. Run the following command: USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4 1 A reference to an existing service account. 2 The path relative to the mount point of the file to project the token into. 3 Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 4 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Note In order to prevent unexpected failure, OpenShift Container Platform overrides the expirationSeconds value to be one year from the initial token generation with the --service-account-extend-token-expiration default of true . You cannot change this setting. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. 14.3. Creating bound service account tokens outside the pod Prerequisites You have created a service account. This procedure assumes that the service account is named build-robot . 
Procedure Create the bound service account token outside the pod by running the following command: USD oc create token build-robot Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ Additional resources Rebooting a node gracefully Creating service accounts | [
"oc edit authentications cluster",
"spec: serviceAccountIssuer: https://test.default.svc 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4",
"oc create -f pod-projected-svc-token.yaml",
"oc create token build-robot",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/bound-service-account-tokens |
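Building on the examples in this section, the following sketch shows the two sides of a bound token in practice: requesting one from outside a pod with an explicit audience and lifetime, and reading the projected token that the kubelet keeps refreshed inside the pod. The build-robot service account, vault audience, and nginx pod name come from the section above, but the flag values themselves are illustrative.

# Request a short-lived token bound to a specific audience from outside a pod.
oc create token build-robot --audience=vault --duration=600s

# Read the projected token inside the pod; an application should re-read this
# file periodically rather than caching it, because the kubelet rotates it.
oc exec nginx -- cat /var/run/secrets/tokens/vault-token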
5.105. icedtea-web | 5.105. icedtea-web 5.105.1. RHSA-2012:1132 - Important: icedtea-web security update Updated icedtea-web packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The IcedTea-Web project provides a Java web browser plug-in and an implementation of Java Web Start, which is based on the Netx project. It also contains a configuration tool for managing deployment settings for the plug-in and Web Start implementations. Security Fixes CVE-2012-3422 An uninitialized pointer use flaw was found in the IcedTea-Web plug-in. Visiting a malicious web page could possibly cause a web browser using the IcedTea-Web plug-in to crash, disclose a portion of its memory, or execute arbitrary code. CVE-2012-3423 It was discovered that the IcedTea-Web plug-in incorrectly assumed all strings received from the browser were NUL terminated. When using the plug-in with a web browser that does not NUL terminate strings, visiting a web page containing a Java applet could possibly cause the browser to crash, disclose a portion of its memory, or execute arbitrary code. Red Hat would like to thank Chamal De Silva for reporting the CVE-2012-3422 issue. This erratum also upgrades IcedTea-Web to version 1.2.1. Refer to the NEWS file for further information. All IcedTea-Web users should upgrade to these updated packages, which resolve these issues. Web browsers using the IcedTea-Web browser plug-in must be restarted for this update to take effect. 5.105.2. RHSA-2012:1434 - Critical: icedtea-web security update Updated icedtea-web packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The IcedTea-Web project provides a Java web browser plug-in and an implementation of Java Web Start, which is based on the Netx project. It also contains a configuration tool for managing deployment settings for the plug-in and Web Start implementations. Security Fix CVE-2012-4540 A buffer overflow flaw was found in the IcedTea-Web plug-in. Visiting a malicious web page could cause a web browser using the IcedTea-Web plug-in to crash or, possibly, execute arbitrary code. Red Hat would like to thank Arthur Gerkis for reporting this issue. This erratum also upgrades IcedTea-Web to version 1.2.2. Refer to the NEWS file for further information: http://icedtea.classpath.org/hg/release/icedtea-web-1.2/file/icedtea-web-1.2.2/NEWS All IcedTea-Web users should upgrade to these updated packages, which resolve this issue. Web browsers using the IcedTea-Web browser plug-in must be restarted for this update to take effect. 5.105.3. RHBA-2012:0845 - icedtea-web bug fix and enhancement update Updated icedtea-web packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. IcedTea-Web provides a Java web browser plug-in, a Java Web Start implementation, and the IcedTea Web Control Panel. 
The icedtea-web packages have been upgraded to upstream version 1.2, which provides a number of bug fixes and enhancements over the previous version. (BZ# 756843 ) Note: This update is not compatible with Firefox 3.6 and earlier. If you are using such a Firefox version, upgrade to a later supported version before applying this update. All users of icedtea-web are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/icedtea-web
Appendix D. Red Hat Virtualization and Encrypted Communication | Appendix D. Red Hat Virtualization and Encrypted Communication D.1. Replacing the Red Hat Virtualization Manager CA Certificate Warning Do not change the permissions and ownerships for the /etc/pki directory or any subdirectories. The permission for the /etc/pki and the /etc/pki/ovirt-engine directory must remain as the default, 755 . You can configure your organization's third-party CA certificate to identify the Red Hat Virtualization Manager to users connecting over HTTPS. Note Using a third-party CA certificate for HTTPS connections does not affect the certificate used for authentication between the Manager and hosts. They will continue to use the self-signed certificate generated by the Manager. Prerequisites A third-party CA certificate. This is the certificate of the CA (Certificate Authority) that issued the certificate you want to use. It is provided as a PEM file. The certificate chain must be complete up to the root certificate. The chain's order is critical and must be from the last intermediate certificate to the root certificate. This procedure assumes that the third-party CA certificate is provided in /tmp/3rd-party-ca-cert.pem . The private key that you want to use for Apache httpd. It must not have a password. This procedure assumes that it is located in /tmp/apache.key . The certificate issued by the CA. This procedure assumes that it is located in /tmp/apache.cer . If you received the private key and certificate from your CA in a P12 file, use the following procedure to extract them. For other file formats, contact your CA. After extracting the private key and certificate, proceed to Replacing the Red Hat Virtualization Manager Apache CA Certificate . Extracting the Certificate and Private Key from a P12 Bundle The internal CA stores the internally generated key and certificate in a P12 file, in /etc/pki/ovirt-engine/keys/apache.p12 . Red Hat recommends storing your new file in the same location. The following procedure assumes that the new P12 file is in /tmp/apache.p12 . Back up the current apache.p12 file: Replace the current file with the new file: Extract the private key and certificate to the required locations. If the file is password protected, you must add -passin pass:_password_ , replacing password with the required password. Important For new Red Hat Virtualization installations, you must complete all of the steps in this procedure. If you upgraded from a Red Hat Enterprise Virtualization 3.6 environment with a commercially signed certificate already configured, only steps 1, 8, and 9 are required. Replacing the Red Hat Virtualization Manager Apache CA Certificate If you are using a self-hosted engine, put the environment into global maintenance mode. For more information, see Section 15.1, "Maintaining the Self-Hosted Engine" . Add your CA certificate to the host-wide trust store: The Manager has been configured to use /etc/pki/ovirt-engine/apache-ca.pem , which is symbolically linked to /etc/pki/ovirt-engine/ca.pem . 
Remove the symbolic link: Save your CA certificate as /etc/pki/ovirt-engine/apache-ca.pem : Back up the existing private key and certificate: Copy the private key to the required location: Set the private key owner to root and set the permissions to 0640 : Copy the certificate to the required location: Restart the Apache server: Create a new trust store configuration file, /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf , with the following parameters: Copy the /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf file, and rename it with an index number that is greater than 10 (for example, 99-setup.conf ). Add the following parameters to the new file: Restart the websocket-proxy service: If you manually changed the /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf file, or are using a configuration file from an older installation, make sure that the Manager is still configured to use /etc/pki/ovirt-engine/apache-ca.pem as the certificate source. Enable engine-backup to update the system on restore by creating a new file, /etc/ovirt-engine-backup/engine-backup-config.d/update-system-wide-pki.sh , with the following content: Restart the ovirt-provider-ovn service: Restart the ovirt-engine service: If you are using a self-hosted engine, turn off global maintenance mode. Your users can now connect to the Administration Portal and VM Portal, without seeing a warning about the authenticity of the certificate used to encrypt HTTPS traffic. | [
"cp -p /etc/pki/ovirt-engine/keys/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12.bck",
"cp /tmp/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12",
"openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nocerts -nodes > /tmp/apache.key openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nokeys > /tmp/apache.cer",
"hosted-engine --set-maintenance --mode=global",
"cp /tmp/ 3rd-party-ca-cert .pem /etc/pki/ca-trust/source/anchors update-ca-trust",
"rm /etc/pki/ovirt-engine/apache-ca.pem",
"cp /tmp/ 3rd-party-ca-cert .pem /etc/pki/ovirt-engine/apache-ca.pem",
"cp /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass.bck cp /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache.cer.bck",
"cp /tmp/apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass",
"chown root:ovirt /etc/pki/ovirt-engine/keys/apache.key.nopass chmod 640 /etc/pki/ovirt-engine/keys/apache.key.nopass",
"cp /tmp/apache.cer /etc/pki/ovirt-engine/certs/apache.cer",
"systemctl restart httpd.service",
"ENGINE_HTTPS_PKI_TRUST_STORE=\"/etc/pki/java/cacerts\" ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=\"\"",
"SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass",
"systemctl restart ovirt-websocket-proxy.service",
"BACKUP_PATHS=\"USD{BACKUP_PATHS} /etc/ovirt-engine-backup\" cp -f /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ca-trust/source/anchors/ 3rd-party-ca-cert .pem update-ca-trust",
"systemctl restart ovirt-provider-ovn.service",
"systemctl restart ovirt-engine.service",
"hosted-engine --set-maintenance --mode=none"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/appe-red_hat_enterprise_virtualization_and_ssl |
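After completing the replacement procedure above, a quick way to confirm the result is to check that the new Apache certificate chains to the CA certificate you installed and that Apache is serving it. This is an unofficial sketch rather than part of the documented procedure; the Manager FQDN is hypothetical, and the verify step assumes apache-ca.pem contains the full chain up to the root.

# Confirm the Apache certificate validates against the installed CA chain.
openssl verify -CAfile /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ovirt-engine/certs/apache.cer

# Confirm the certificate actually served over HTTPS is the new one.
echo | openssl s_client -connect manager.example.com:443 -servername manager.example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates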
Chapter 9. Clair configuration overview | Chapter 9. Clair configuration overview Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example: USD clair -conf ./path/to/config.yaml -mode indexer or USD clair -conf ./path/to/config.yaml -mode matcher The aforementioned commands start two Clair nodes using the same configuration file: one runs the indexing facilities, while the other runs the matching facilities. If you are running Clair in combo mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration. 9.1. Information about using Clair in a proxy environment Environment variables respected by the Go standard library can be specified if needed, for example: HTTP_PROXY USD export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port> HTTPS_PROXY USD export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port> SSL_CERT_DIR USD export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates> NO_PROXY USD export NO_PROXY=<comma_separated_list_of_hosts_and_domains> If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URLs need to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv updater requires access to https://osv-vulnerabilities.storage.googleapis.com to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs". You must also ensure that the standard Clair URLs are added to the proxy allowlist: https://search.maven.org/solrsearch/select https://catalog.redhat.com/api/containers/ https://access.redhat.com/security/data/metrics/repository-to-cpe.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy. 9.2. Clair configuration reference The following YAML shows an example Clair configuration: http_listen_addr: "" introspection_addr: "" log_level: "" tls: {} indexer: connstring: "" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: "" indexer_addr: "" migrations: false period: "" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: "" migrations: false indexer_addr: "" matcher_addr: "" poll_interval: "" delivery_interval: "" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: "" probability: null jaeger: agent: endpoint: "" collector: endpoint: "" username: null password: null service_name: "" tags: nil buffer_max: 0 metrics: name: "" prometheus: endpoint: null dogstatsd: url: "" Note The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally. 9.3. Clair general fields The following table describes the general configuration fields available for a Clair deployment. Field Type Description http_listen_addr String Configures where the HTTP API is exposed.
Default: :6060 introspection_addr String Configures where Clair's metrics and health endpoints are exposed. log_level String Sets the logging level. Requires one of the following strings: debug-color , debug , info , warn , error , fatal , panic tls String A map containing the configuration for serving the HTTP API over TLS/SSL and HTTP/2. .cert String The TLS certificate to be used. Must be a full-chain certificate. Example configuration for general Clair fields The following example shows a Clair configuration. Example configuration for general Clair fields # ... http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info # ... 9.4. Clair indexer configuration fields The following table describes the configuration fields for Clair's indexer component. Field Type Description indexer Object Provides Clair indexer node configuration. .airgap Boolean Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .index_report_request_concurrency Integer Rate limits the number of index report creation requests. Setting this to 0 attempts to auto-size this value. Setting a negative value means unlimited. The auto-sizing is a multiple of the number of available cores. The API returns a 429 status code if concurrency is exceeded. .scanlock_retry Integer A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. .layer_scan_concurrency Integer Positive integer limiting the number of concurrent layer scans. Indexers will scan a manifest's layers concurrently. This value tunes the number of layers an indexer scans in parallel. .migrations Boolean Whether indexer nodes handle migrations to their database. .scanner String Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration passed to it on construction if designed to do so. .scanner.dist String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.package String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.repo String A map with the name of a particular scanner and arbitrary YAML as a value. Example indexer configuration The following example shows a hypothetical indexer configuration for Clair. Example indexer configuration # ... indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true # ... 9.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string.
.indexer_addr String A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. .migrations Boolean Whether matcher nodes handle migrations to their databases. .period String Determines how often updates for new security advisories take place. Defaults to 6h . .disable_updaters Boolean Whether to run background updates or not. Default: False .update_retention Integer Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to 10 . If a value of less than 0 is provided, garbage collection is disabled. 2 is the minimum value to ensure updates can be compared to notifications. Example matcher configuration Example matcher configuration # ... matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=<DB_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2 # ... 9.6. Clair matchers configuration fields The following table describes the configuration fields for Clair's matchers component. Note Differs from matcher configuration fields. Table 9.1. Matchers configuration fields Field Type Description matchers Array of strings Provides configuration for the in-tree matchers . .names String A list of string values informing the matcher factory about enabled matchers. If the value is set to null , the default list of matchers runs. The following strings are accepted: alpine-matcher , aws-matcher , debian-matcher , gobin , java-maven , oracle , photon , python , rhel , rhel-container-matcher , ruby , suse , ubuntu-matcher .config String Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matchers factory constructor. For example: Example matchers configuration The following example shows a hypothetical Clair deployment that requires only the alpine , aws , debian , and oracle matchers. Example matchers configuration # ... matchers: names: - "alpine-matcher" - "aws" - "debian" - "oracle" # ... 9.7. Clair updaters configuration fields The following table describes the configuration fields for Clair's updaters component. Table 9.2. Updaters configuration fields Field Type Description updaters Object Provides configuration for the matcher's update manager. .sets String A list of values informing the update manager which updaters to run. If the value is set to null , the default set of updaters runs the following: alpine , aws , clair.cvss , debian , oracle , photon , osv , rhel , rhcc , suse , ubuntu If left blank, zero updaters run. .config String Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set's constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". Example updaters configuration In the following configuration, only the rhel set is configured. The ignore_unpatched variable, which is specific to the rhel updater, is also defined. Example updaters configuration # ... updaters: sets: - rhel config: rhel: ignore_unpatched: false # ... 9.8. Clair notifier configuration fields The general notifier configuration fields for Clair are listed below. 
Field Type Description notifier Object Provides Clair notifier node configuration. .connstring String Postgres connection string. Accepts format as URL, or libpq connection string. .migrations Boolean Whether notifier nodes handle migrations to their database. .indexer_addr String A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. .matcher_addr String A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. .poll_interval String The frequency at which the notifier will query a matcher for update operations. .delivery_interval String The frequency at which the notifier attempts delivery of created, or previously failed, notifications. .disable_summary Boolean Controls whether notifications should be summarized to one per manifest. Example notifier configuration The following notifier snippet is for a minimal configuration. Example notifier configuration # ... notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" headers: "" amqp: null stomp: null # ... 9.8.1. Clair webhook configuration fields The following webhook fields are available for the Clair notifier environment. Table 9.3. Clair webhook fields .webhook Object Configures the notifier for webhook delivery. .webhook.target String URL where the webhook will be delivered. .webhook.callback String The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. .webhook.headers String A map associating a header name to a list of values. Example webhook configuration Example webhook configuration # ... notifier: # ... webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" # ... 9.8.2. Clair amqp configuration fields The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment. .amqp Object Configures the notifier for AMQP delivery. [NOTE] ==== Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should setup exchanges and queues ahead of time. ==== .amqp.direct Boolean If true , the notifier will deliver individual notifications (not a callback) to the configured AMQP broker. .amqp.rollup Integer When amqp.direct is set to true , this value informs the notifier of how many notifications to send in a direct delivery. For example, if direct is set to true , and amqp.rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .amqp.exchange Object The AMQP exchange to connect to. .amqp.exchange.name String The name of the exchange to connect to. .amqp.exchange.type String The type of the exchange. Typically one of the following: direct , fanout , topic , headers . .amqp.exchange.durability Boolean Whether the configured queue is durable. .amqp.exchange.auto_delete Boolean Whether the configured queue uses an auto_delete_policy . 
.amqp.routing_key String The name of the routing key each notification is sent with. .amqp.callback String If amqp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .amqp.uris String A list of one or more AMQP brokers to connect to, in priority order. .amqp.tls Object Configures TLS/SSL connection to an AMQP broker. .amqp.tls.root_ca String The filesystem path where a root CA can be read. [NOTE] ==== Clair also allows SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .amqp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .amqp.tls.key String The filesystem path where a TLS/SSL private key can be read. Example AMQP configuration The following example shows a hypothetical AMQP configuration for Clair. Example AMQP configuration # ... notifier: # ... amqp: exchange: name: "" type: "direct" durable: true auto_delete: false uris: ["amqp://user:pass@host:10000/vhost"] direct: false routing_key: "notifications" callback: "http://clair-notifier/notifier/api/v1/notifications" tls: root_ca: "optional/path/to/rootca" cert: "mandatory/path/to/cert" key: "mandatory/path/to/key" # ... 9.8.3. Clair STOMP configuration fields The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment. .stomp Object Configures the notifier for STOMP delivery. .stomp.direct Boolean If true , the notifier delivers individual notifications (not a callback) to the configured STOMP broker. .stomp.rollup Integer If stomp.direct is set to true , this value limits the number of notifications sent in a single direct delivery. For example, if direct is set to true , and rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .stomp.callback String If stomp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .stomp.destination String The STOMP destination to deliver notifications to. .stomp.uris String A list of one or more STOMP brokers to connect to in priority order. .stomp.tls Object Configures the TLS/SSL connection to the STOMP broker. .stomp.tls.root_ca String The filesystem path where a root CA can be read. [NOTE] ==== Clair also respects SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .stomp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .stomp.tls.key String The filesystem path where a TLS/SSL private key can be read. .stomp.user String Configures login details for the STOMP broker. .stomp.user.login String The STOMP login to connect with. .stomp.user.passcode String The STOMP passcode to connect with. Example STOMP configuration The following example shows a hypothetical STOMP configuration for Clair. Example STOMP configuration # ... notifier: # ... stomp: destination: "notifications" direct: false callback: "http://clair-notifier/notifier/api/v1/notifications" login: login: "username" passcode: "passcode" tls: root_ca: "optional/path/to/rootca" cert: "mandatory/path/to/cert" key: "mandatory/path/to/key" # ... 9.9. Clair authorization configuration fields The following authorization configuration fields are available for Clair. Field Type Description auth Object Defines Clair's external and intra-service JWT based authentication. 
If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. .psk String Defines pre-shared key authentication. .psk.key String A shared base64 encoded key distributed between all parties signing and verifying JWTs. .psk.iss String A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. Example authorization configuration The following authorization snippet is for a minimal configuration. Example authorization configuration # ... auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: ["quay"] # ... 9.10. Clair trace configuration fields The following trace configuration fields are available for Clair. Field Type Description trace Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the application traces will belong to. .probability Integer The probability a trace will occur. .jaeger Object Defines values for Jaeger tracing. .jaeger.agent Object Defines values for configuring delivery to a Jaeger agent. .jaeger.agent.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector Object Defines values for configuring delivery to a Jaeger collector. .jaeger.collector.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector.username String A Jaeger username. .jaeger.collector.password String A Jaeger password. .jaeger.service_name String The service name registered in Jaeger. .jaeger.tags String Key-value pairs to provide additional metadata. .jaeger.buffer_max Integer The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. Example trace configuration The following example shows a hypothetical trace configuration for Clair. Example trace configuration # ... trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" # ... 9.11. Clair metrics configuration fields The following metrics configuration fields are available for Clair. Field Type Description metrics Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the metrics in use. .prometheus String Configuration for a Prometheus metrics exporter. .prometheus.endpoint String Defines the path where metrics are served. Example metrics configuration The following example shows a hypothetical metrics configuration for Clair. Example metrics configuration # ... metrics: name: "prometheus" prometheus: endpoint: "/metricsz" # ... | [
"clair -conf ./path/to/config.yaml -mode indexer",
"clair -conf ./path/to/config.yaml -mode matcher",
"export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>",
"export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>",
"export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>",
"export NO_PROXY=<comma_separated_list_of_hosts_and_domains>",
"http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"",
"http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info",
"indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true",
"matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2",
"matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"",
"updaters: sets: - rhel config: rhel: ignore_unpatched: false",
"notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null",
"notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"",
"notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"",
"notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"",
"auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]",
"trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"",
"metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\""
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/config-fields-overview |
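The per-component snippets in this chapter can also be assembled into one file for a combo-mode deployment. The following configuration is an illustrative sketch, not a tested reference: it assumes a single PostgreSQL database named clair reachable at the placeholder host clair-db.example.com, and that Clair is started with -mode combo (or the equivalent CLAIR_MODE environment variable) and pointed at this file. Because all three components run in one process in combo mode, the indexer_addr and matcher_addr fields shown in the distributed examples can normally be omitted; adjust connection strings, credentials, and listen addresses for your own environment.
# Minimal combo-mode sketch assembled from the examples in this chapter
http_listen_addr: 0.0.0.0:6060
introspection_addr: 0.0.0.0:8089
log_level: info
indexer:
  connstring: host=clair-db.example.com port=5432 dbname=clair user=clair password=<DB_PASS> sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
matcher:
  connstring: host=clair-db.example.com port=5432 dbname=clair user=clair password=<DB_PASS> sslmode=disable
  migrations: true
  period: 6h
  update_retention: 2
notifier:
  connstring: host=clair-db.example.com port=5432 dbname=clair user=clair password=<DB_PASS> sslmode=disable
  migrations: true
  poll_interval: 15s
  delivery_interval: 5s
  webhook:
    target: "http://webhook/"
    callback: "http://clair-notifier/notifier/api/v1/notifications"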
28.5.3. Saving Package Information | 28.5.3. Saving Package Information In a single-machine ABRT installation, problems are usually reported to external bug databases such as RHTSupport or Bugzilla. Reporting to these bug databases usually requires knowledge about the component and package in which the problem occurred. The post-create event runs the abrt-action-save-package-data tool (among other steps) in order to provide this information in the standard ABRT installation. If you are setting up a centralized crash collection system, your requirements may be significantly different. Depending on your needs, you have two options: Internal analysis of problems After collecting problem data, you do not need to collect package information if you plan to analyze problems in-house, without reporting them to any external bug databases. You might be also interested in collecting crashes that occur in programs written by your organization or third-party applications installed on your system. If such a program is a part of an RPM package, then on client systems and a dedicated crash collecting system , you can only add the respective GPG key to the /etc/abrt/gpg_keys file or set the following line in the /etc/abrt/abrt-action-save-package-data.conf file: If the program does not belong to any RPM package, take the following steps on both, client systems and a dedicated crash collecting system : Remove the following rule from the /etc/libreport/events.d/abrt_event.conf file: EVENT=post-create component= abrt-action-save-package-data Prevent deletion of problem data directories which do not correspond to any installed package by setting the following directive in the /etc/abrt/abrt-action-save-package-data.conf file: Reporting to external bug database Alternatively, you may want to report crashes to RHTSupport or Bugzilla. In this case, you need to collect package information. Generally, client machines and dedicated crash collecting systems have non-identical sets of installed packages. Therefore, it may happen that problem data uploaded from a client does not correspond to any package installed on the dedicated crash collecting system. In the standard ABRT configuration, this will lead to deletion of problem data (ABRT will consider it to be a crash in an unpackaged executable). To prevent this from happening, it is necessary to modify ABRT 's configuration on the dedicated system in the following way: Prevent inadvertent collection of package information for problem data uploaded from client machines, by adding the remote!=1 condition in the /etc/libreport/events.d/abrt_event.conf file: EVENT=post-create remote!=1 component= abrt-action-save-package-data Prevent deletion of problem data directories which do not correspond to any installed package by setting the following directive in /etc/abrt/abrt-action-save-package-data.conf : Note Note that in this case, no such modifications are necessary on client systems: they continue to collect package information, and continue to ignore crashes in unpackaged executables. | [
"OpenGPGCheck = no",
"EVENT=post-create component= abrt-action-save-package-data",
"ProcessUnpackaged = yes",
"EVENT=post-create remote!=1 component= abrt-action-save-package-data",
"ProcessUnpackaged = yes"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-centralized_crash_collection-saving_package_information |
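If you roll the internal-analysis settings from this section out to many machines, the configuration edits can be scripted. The following shell sketch is illustrative only: it assumes the stock file layout shipped with the abrt packages and that both directives are already present, uncommented, in /etc/abrt/abrt-action-save-package-data.conf (append them instead if they are absent). Removing the post-create rule from /etc/libreport/events.d/abrt_event.conf remains a manual edit; review the files before restarting the daemon.
# disable GPG signature checking so crashes in unsigned or third-party RPMs are kept
sed -i 's/^OpenGPGCheck.*/OpenGPGCheck = no/' /etc/abrt/abrt-action-save-package-data.conf
# keep problem data for executables that are not part of any RPM package
sed -i 's/^ProcessUnpackaged.*/ProcessUnpackaged = yes/' /etc/abrt/abrt-action-save-package-data.conf
# restart the ABRT daemon so the new settings take effect
service abrtd restart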
Chapter 4. Planning and implementing TLS | Chapter 4. Planning and implementing TLS TLS (Transport Layer Security) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols, authentication methods, and encryption algorithms, it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. 4.1. SSL and TLS protocols The Secure Sockets Layer (SSL) protocol was originally developed by Netscape Corporation to provide a mechanism for secure communication over the Internet. Subsequently, the protocol was adopted by the Internet Engineering Task Force (IETF) and renamed to Transport Layer Security (TLS). The TLS protocol sits between an application protocol layer and a reliable transport layer, such as TCP/IP. It is independent of the application protocol and can thus be layered underneath many different protocols, for example: HTTP, FTP, SMTP, and so on. Protocol version Usage recommendation SSL v2 Do not use. Has serious security vulnerabilities. Removed from the core crypto libraries since RHEL 7. SSL v3 Do not use. Has serious security vulnerabilities. Removed from the core crypto libraries since RHEL 8. TLS 1.0 Not recommended to use. Has known issues that cannot be mitigated in a way that guarantees interoperability, and does not support modern cipher suites. In RHEL 8, enabled only in the LEGACY system-wide cryptographic policy profile. TLS 1.1 Use for interoperability purposes where needed. Does not support modern cipher suites. In RHEL 8, enabled only in the LEGACY policy. TLS 1.2 Supports the modern AEAD cipher suites. This version is enabled in all system-wide crypto policies, but optional parts of this protocol contain vulnerabilities and TLS 1.2 also allows outdated algorithms. TLS 1.3 Recommended version. TLS 1.3 removes known problematic options, provides additional privacy by encrypting more of the negotiation handshake and can be faster thanks usage of more efficient modern cryptographic algorithms. TLS 1.3 is also enabled in all system-wide cryptographic policies. Additional resources IETF: The Transport Layer Security (TLS) Protocol Version 1.3 4.2. Security considerations for TLS in RHEL 8 In RHEL 8, cryptography-related considerations are significantly simplified thanks to the system-wide crypto policies. The DEFAULT crypto policy allows only TLS 1.2 and 1.3. To allow your system to negotiate connections using the earlier versions of TLS, you need to either opt out from following crypto policies in an application or switch to the LEGACY policy with the update-crypto-policies command. See Using system-wide cryptographic policies for more information. The default settings provided by libraries included in RHEL 8 are secure enough for most deployments. The TLS implementations use secure algorithms where possible while not preventing connections from or to legacy clients or servers. Apply hardened settings in environments with strict security requirements where legacy clients or servers that do not support secure algorithms or protocols are not expected or allowed to connect. 
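If you need to confirm how a given server behaves before and after such hardening, a quick client-side probe is usually enough. The commands below are a sketch only: tls.example.com is a placeholder host, the openssl command-line tool is assumed to be installed, and the probing client's own system-wide crypto policy also applies, so run the check from a host that still permits the older protocol versions. Each command attempts a handshake restricted to a single protocol version, so a refused handshake indicates that the version is disabled on the server (or by the client policy).
# expected to fail against a hardened server
openssl s_client -connect tls.example.com:443 -tls1_1 < /dev/null
# expected to succeed
openssl s_client -connect tls.example.com:443 -tls1_2 < /dev/null
openssl s_client -connect tls.example.com:443 -tls1_3 < /dev/null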
The most straightforward way to harden your TLS configuration is switching the system-wide cryptographic policy level to FUTURE using the update-crypto-policies --set FUTURE command. Warning Algorithms disabled for the LEGACY cryptographic policy do not conform to Red Hat's vision of RHEL 8 security, and their security properties are not reliable. Consider moving away from using these algorithms instead of re-enabling them. If you do decide to re-enable them, for example for interoperability with old hardware, treat them as insecure and apply extra protection measures, such as isolating their network interactions to separate network segments. Do not use them across public networks. If you decide to not follow RHEL system-wide crypto policies or create custom cryptographic policies tailored to your setup, use the following recommendations for preferred protocols, cipher suites, and key lengths on your custom configuration: 4.2.1. Protocols The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to include support for older versions of TLS, allow your systems to negotiate connections using at least TLS version 1.2. Note that even though RHEL 8 supports TLS version 1.3, not all features of this protocol are fully supported by RHEL 8 components. For example, the 0-RTT (Zero Round Trip Time) feature, which reduces connection latency, is not yet fully supported by the Apache web server. 4.2.2. Cipher suites Modern, more secure cipher suites should be preferred to old, insecure ones. Always disable the use of eNULL and aNULL cipher suites, which do not offer any encryption or authentication at all. If at all possible, ciphers suites based on RC4 or HMAC-MD5, which have serious shortcomings, should also be disabled. The same applies to the so-called export cipher suites, which have been intentionally made weaker, and thus are easy to break. While not immediately insecure, cipher suites that offer less than 128 bits of security should not be considered for their short useful life. Algorithms that use 128 bits of security or more can be expected to be unbreakable for at least several years, and are thus strongly recommended. Note that while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security. Always prefer cipher suites that support (perfect) forward secrecy (PFS), which ensures the confidentiality of encrypted data even in case the server key is compromised. This rules out the fast RSA key exchange, but allows for the use of ECDHE and DHE. Of the two, ECDHE is the faster and therefore the preferred choice. You should also prefer AEAD ciphers, such as AES-GCM, over CBC-mode ciphers as they are not vulnerable to padding oracle attacks. Additionally, in many cases, AES-GCM is faster than AES in CBC mode, especially when the hardware has cryptographic accelerators for AES. Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even faster than a pure RSA key exchange. To provide support for legacy clients, you can install two pairs of certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for legacy ones). 4.2.3. Public key length When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which is sufficiently large for true 128 bits of security. Warning The security of your system is only as strong as the weakest link in the chain. For example, a strong cipher alone does not guarantee good security. 
The keys and the certificates are just as important, as well as the hash functions and keys used by the Certification Authority (CA) to sign your keys. Additional resources System-wide crypto policies in RHEL 8 update-crypto-policies(8) man page on your system 4.3. Hardening TLS configuration in applications In RHEL, system-wide crypto policies provide a convenient way to ensure that your applications that use cryptographic libraries do not allow known insecure protocols, ciphers, or algorithms. If you want to harden your TLS-related configuration with your customized cryptographic settings, you can use the cryptographic configuration options described in this section, and override the system-wide crypto policies just in the minimum required amount. Regardless of the configuration you choose to use, always ensure that your server application enforces server-side cipher order , so that the cipher suite to be used is determined by the order you configure. 4.3.1. Configuring the Apache HTTP server to use TLS The Apache HTTP Server can use both OpenSSL and NSS libraries for its TLS needs. RHEL 8 provides the mod_ssl functionality through eponymous packages: The mod_ssl package installs the /etc/httpd/conf.d/ssl.conf configuration file, which can be used to modify the TLS-related settings of the Apache HTTP Server . Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server , including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file are described in detail in the /usr/share/httpd/manual/mod/mod_ssl.html file. Examples of various settings are described in the /usr/share/httpd/manual/ssl/ssl_howto.html file. When modifying the settings in the /etc/httpd/conf.d/ssl.conf configuration file, be sure to consider the following three directives at the minimum: SSLProtocol Use this directive to specify the version of TLS or SSL you want to allow. SSLCipherSuite Use this directive to specify your preferred cipher suite or disable the ones you want to disallow. SSLHonorCipherOrder Uncomment and set this directive to on to ensure that the connecting clients adhere to the order of ciphers you specified. For example, to use only the TLS 1.2 and 1.3 protocol: See the Configuring TLS encryption on an Apache HTTP Server chapter in the Deploying different types of servers document for more information. 4.3.2. Configuring the Nginx HTTP and proxy server to use TLS To enable TLS 1.3 support in Nginx , add the TLSv1.3 value to the ssl_protocols option in the server section of the /etc/nginx/nginx.conf configuration file: See the Adding TLS encryption to an Nginx web server chapter in the Deploying different types of servers document for more information. 4.3.3. Configuring the Dovecot mail server to use TLS To configure your installation of the Dovecot mail server to use TLS, modify the /etc/dovecot/conf.d/10-ssl.conf configuration file. You can find an explanation of some of the basic configuration directives available in that file in the /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt file, which is installed along with the standard installation of Dovecot . When modifying the settings in the /etc/dovecot/conf.d/10-ssl.conf configuration file, be sure to consider the following three directives at the minimum: ssl_protocols Use this directive to specify the version of TLS or SSL you want to allow or disable. ssl_cipher_list Use this directive to specify your preferred cipher suites or disable the ones you want to disallow. 
ssl_prefer_server_ciphers Uncomment and set this directive to yes to ensure that the connecting clients adhere to the order of ciphers you specified. For example, the following line in /etc/dovecot/conf.d/10-ssl.conf allows only TLS 1.1 and later: Additional resources Deploying different types of servers on RHEL 8 config(5) and ciphers(1) man pages. Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) . Mozilla SSL Configuration Generator . SSL Server Test . | [
"yum install mod_ssl",
"SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1",
"server { listen 443 ssl http2; listen [::]:443 ssl http2; . ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers . }",
"ssl_protocols = !SSLv2 !SSLv3 !TLSv1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/securing_networks/planning-and-implementing-tls_securing-networks |
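For the Apache HTTP Server, the three mod_ssl directives called out in this chapter are usually set together. The following fragment for /etc/httpd/conf.d/ssl.conf is a sketch rather than a recommended baseline: the cipher string is only one possible expression of the ECDHE and AEAD preferences discussed above, it overrides the system-wide crypto policy for this server, and it must be reviewed against your own compatibility requirements before use.
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
SSLHonorCipherOrder on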
Part II. Configuring Business Central settings and properties | Part II. Configuring Business Central settings and properties As an administrator, you can customize the following on the admin Settings page: Roles : Set the home page, priority, and permissions of a role. Groups : Set the home page, priority, and permissions of a group as well as create and delete groups. Users : Create and delete users, add or remove groups and roles from users, and view user permissions. Artifacts : View M2 repository artifacts, upload artifacts, view, and download JAR files. Data Sources : Add, update, or delete data sources and database drivers. Data Sets : Create, modify, or delete data sets. Projects : View and edit project preferences such as file export properties, space properties, default values, and advanced GAV properties. Artifact Repository : Manage artifact repository properties. Languages : Set the Business Central language. Process Administration : Set the default pagination option in Business Central. Process Designer : Set diagram editor properties. SSH Keys : Add or delete SSH keys. Custom Tasks Administration : Enable or disable default service tasks and upload custom service tasks. Dashbuilder Data Transfer : Import and export Dashbuilder data as ZIP files in Business Central. Profiles : Set the workbench profile as Planner and Rules or Full . Archetypes : View, add, validate, set as default, and delete the archetypes. Used as a template when creating a new project in Business Central. Prerequisites Red Hat JBoss Enterprise Application Platform 7.4.14 is installed. For more information, see Red Hat JBoss Enterprise Application Platform 7.4 Installation Guide . Red Hat Decision Manager is installed and running. For more information, see Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . You are logged in to Business Central with the admin user role. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/assembly-configuring-central |
Chapter 9. Integrations | Chapter 9. Integrations 9.1. Integrating Service Mesh with OpenShift Serverless The OpenShift Serverless Operator provides Kourier as the default ingress for Knative. However, you can use Service Mesh with OpenShift Serverless whether Kourier is enabled or not. Integrating with Kourier disabled allows you to configure additional networking and routing options that the Kourier ingress does not support, such as mTLS functionality. Important OpenShift Serverless only supports the use of Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide, and does not support other undocumented features. 9.1.1. Prerequisites The examples in the following procedures use the domain example.com . The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate. To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA. You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com , you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com . For more information about configuring wildcard certificates, see the following topic about Creating a certificate to encrypt incoming external traffic . If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains. For more information, see the OpenShift Serverless documentation about Creating a custom domain mapping . 9.1.2. Creating a certificate to encrypt incoming external traffic By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a root certificate and private key that signs the certificates for your Knative services: USD openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ -subj '/O=Example Inc./CN=example.com' \ -keyout root.key \ -out root.crt Create a wildcard certificate: USD openssl req -nodes -newkey rsa:2048 \ -subj "/CN=*.apps.openshift.example.com/O=Example Inc." 
\ -keyout wildcard.key \ -out wildcard.csr Sign the wildcard certificate: USD openssl x509 -req -days 365 -set_serial 0 \ -CA root.crt \ -CAkey root.key \ -in wildcard.csr \ -out wildcard.crt Create a secret by using the wildcard certificate: USD oc create -n istio-system secret tls wildcard-certs \ --key=wildcard.key \ --cert=wildcard.crt This certificate is picked up by the gateways created when you integrate OpenShift Serverless with Service Mesh, so that the ingress gateway serves traffic with this certificate. 9.1.3. Integrating Service Mesh with OpenShift Serverless You can integrate Service Mesh with OpenShift Serverless without using Kourier as the default ingress. To do this, do not install the Knative Serving component before completing the following procedure. There are additional steps required when creating the KnativeServing custom resource definition (CRD) to integrate Knative Serving with Service Mesh, which are not covered in the general Knative Serving installation procedure. This procedure might be useful if you want to integrate Service Mesh as the default and only ingress for your OpenShift Serverless installation. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the Red Hat OpenShift Service Mesh Operator and create a ServiceMeshControlPlane resource in the istio-system namespace. If you want to use mTLS functionality, you must also set the spec.security.dataPlane.mtls field for the ServiceMeshControlPlane resource to true . Important Using OpenShift Serverless with Service Mesh is only supported with Red Hat OpenShift Service Mesh version 2.0.5 or later. Install the OpenShift Serverless Operator. Install the OpenShift CLI ( oc ). Procedure Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members: apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - <namespace> 1 A list of namespaces to be integrated with Service Mesh. Important This list of namespaces must include the knative-serving namespace. Apply the ServiceMeshMemberRoll resource: USD oc apply -f <filename> Create the necessary gateways so that Service Mesh can accept traffic: Example knative-local-gateway object using HTTP apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 8081 name: http protocol: HTTP 2 hosts: - "*" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: "true" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081 1 Add the name of the secret that contains the wildcard certificate. 2 The knative-local-gateway serves HTTP traffic. 
Using HTTP means that traffic coming from outside of Service Mesh, but using an internal hostname, such as example.default.svc.cluster.local , is not encrypted. You can set up encryption for this path by creating another wildcard certificate and an additional gateway that uses a different protocol spec. Example knative-local-gateway object using HTTPS apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE credentialName: <wildcard_certs> Apply the Gateway resources: USD oc apply -f <filename> Install Knative Serving by creating the following KnativeServing custom resource definition (CRD), which also enables the Istio integration: apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: autoscaler annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" 1 Enables Istio integration. 2 Enables sidecar injection for Knative Serving data plane pods. Apply the KnativeServing resource: USD oc apply -f <filename> Create a Knative Service that has sidecar injection enabled and uses a pass-through route: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: "true" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 3 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: <image_url> 1 A namespace that is part of the Service Mesh member roll. 2 Instructs Knative Serving to generate an OpenShift Container Platform pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly. 3 Injects Service Mesh sidecars into the Knative service pods. Apply the Service resource: USD oc apply -f <filename> Verification Access your serverless application by using a secure connection that is now trusted by the CA: USD curl --cacert root.crt <service_url> Example command USD curl --cacert root.crt https://hello-default.apps.openshift.example.com Example output Hello Openshift! 9.1.4. Enabling Knative Serving metrics when using Service Mesh with mTLS If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default, because Service Mesh prevents Prometheus from scraping metrics. This section shows how to enable Knative Serving metrics when using Service Mesh and mTLS. Prerequisites You have installed the OpenShift Serverless Operator and Knative Serving on your cluster. You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled. You have access to an OpenShift Container Platform account with cluster administrator access. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
Procedure Specify prometheus as the metrics.backend-destination in the observability spec of the Knative Serving custom resource (CR): apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: observability: metrics.backend-destination: "prometheus" ... This step prevents metrics from being disabled by default. Apply the following network policy to allow traffic from the Prometheus namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ns namespace: knative-serving spec: ingress: - from: - namespaceSelector: matchLabels: name: "openshift-monitoring" podSelector: {} ... Modify and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec: ... spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444 ... 9.1.5. Integrating Service Mesh with OpenShift Serverless when Kourier is enabled You can use Service Mesh with OpenShift Serverless even if Kourier is already enabled. This procedure might be useful if you have already installed Knative Serving with Kourier enabled, but decide to add a Service Mesh integration later. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the OpenShift CLI ( oc ). Install the OpenShift Serverless Operator and Knative Serving on your cluster. Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh and Kourier is supported for use with both Red Hat OpenShift Service Mesh versions 1.x and 2.x. Procedure Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members: apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: - <namespace> 1 ... 1 A list of namespaces to be integrated with Service Mesh. Apply the ServiceMeshMemberRoll resource: USD oc apply -f <filename> Create a network policy that permits traffic flow from Knative system pods to Knative services: For each namespace that you want to integrate with Service Mesh, create a NetworkPolicy resource: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-serving-system-namespace namespace: <namespace> 1 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/part-of: "openshift-serverless" podSelector: {} policyTypes: - Ingress ... 1 Add the namespace that you want to integrate with Service Mesh. Note The knative.openshift.io/part-of: "openshift-serverless" label was added in OpenShift Serverless 1.22.0. If you are using OpenShift Serverless 1.21.1 or earlier, add the knative.openshift.io/part-of label to the knative-serving and knative-serving-ingress namespaces. Add the label to the knative-serving namespace: USD oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless Add the label to the knative-serving-ingress namespace: USD oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless Apply the NetworkPolicy resource: USD oc apply -f <filename> 9.1.6. Improving net-istio memory usage by using secret filtering for Service Mesh By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. 
This can lead to a substantial overhead when many resources are available, which can cause the Knative net-istio ingress controller to fail on large clusters due to memory leaking. However, a filtering mechanism is available for the Knative net-istio ingress controller, which enables the controller to only fetch Knative related secrets. You can enable this mechanism by adding an annotation to the KnativeServing custom resource (CR). Important If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh only is supported for use with Red Hat OpenShift Service Mesh version 2.0.5 or later. Install the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). Procedure Add the serverless.openshift.io/enable-secret-informer-filtering annotation to the KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/enable-secret-informer-filtering: "true" 1 spec: ingress: istio: enabled: true deployments: - annotations: sidecar.istio.io/inject: "true" sidecar.istio.io/rewriteAppHTTPProbers: "true" name: activator - annotations: sidecar.istio.io/inject: "true" sidecar.istio.io/rewriteAppHTTPProbers: "true" name: autoscaler 1 Adding this annotation injects an environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true , to the net-istio controller pod. Note This annotation is ignored if you set a different value by overriding deployments. 9.2. Integrating Serverless with the cost management service Cost management is an OpenShift Container Platform service that enables you to better understand and track costs for clouds and containers. It is based on the open source Koku project. 9.2.1. Prerequisites You have cluster administrator permissions. You have set up cost management and added an OpenShift Container Platform source . 9.2.2. Using labels for cost management queries Labels, also known as tags in cost management, can be applied for nodes, namespaces or pods. Each label is a key and value pair. You can use a combination of multiple labels to generate reports. You can access reports about costs by using the Red Hat hybrid console . Labels are inherited from nodes to namespaces, and from namespaces to pods. However, labels are not overridden if they already exist on a resource. For example, Knative services have a default app=<revision_name> label: Example Knative service default label apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service spec: ... labels: app: <revision_name> ... If you define a label for a namespace, such as app=my-domain , the cost management service does not take into account costs coming from a Knative service with the tag app=<revision_name> when querying the application using the app=my-domain tag. Costs for Knative services that have this tag must be queried under the app=<revision_name> tag. 9.2.3. 
Additional resources Configure tagging for your sources Use the Cost Explorer to visualize your costs 9.3. Using NVIDIA GPU resources with serverless applications NVIDIA supports using GPU resources on OpenShift Container Platform. See GPU Operator on OpenShift for more information about setting up GPU resources on OpenShift Container Platform. 9.3.1. Specifying GPU requirements for a service After GPU resources are enabled for your OpenShift Container Platform cluster, you can specify GPU requirements for a Knative service using the Knative ( kn ) CLI. Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. GPU resources are enabled for your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Note Using NVIDIA GPU resources is not supported for IBM Z and IBM Power. Procedure Create a Knative service and set the GPU resource requirement limit to 1 by using the --limit nvidia.com/gpu=1 flag: USD kn service create hello --image <service-image> --limit nvidia.com/gpu=1 A GPU resource requirement limit of 1 means that the service has 1 GPU resource dedicated. Services do not share GPU resources. Any other services that require GPU resources must wait until the GPU resource is no longer in use. A limit of 1 GPU also means that applications exceeding usage of 1 GPU resource are restricted. If a service requests more than 1 GPU resource, it is deployed on a node where the GPU resource requirements can be met. Optional. For an existing service, you can change the GPU resource requirement limit to 3 by using the --limit nvidia.com/gpu=3 flag: USD kn service update hello --limit nvidia.com/gpu=3 9.3.2. Additional resources Setting resource quotas for extended resources | [
"openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example Inc./CN=example.com' -keyout root.key -out root.crt",
"openssl req -nodes -newkey rsa:2048 -subj \"/CN=*.apps.openshift.example.com/O=Example Inc.\" -keyout wildcard.key -out wildcard.csr",
"openssl x509 -req -days 365 -set_serial 0 -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt",
"oc create -n istio-system secret tls wildcard-certs --key=wildcard.key --cert=wildcard.crt",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - <namespace>",
"oc apply -f <filename>",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 8081 name: http protocol: HTTP 2 hosts: - \"*\" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs>",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: autoscaler annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: \"true\" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 3 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: <image_url>",
"oc apply -f <filename>",
"curl --cacert root.crt <service_url>",
"curl --cacert root.crt https://hello-default.apps.openshift.example.com",
"Hello Openshift!",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: observability: metrics.backend-destination: \"prometheus\"",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ns namespace: knative-serving spec: ingress: - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" podSelector: {}",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: - <namespace> 1",
"oc apply -f <filename>",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-serving-system-namespace namespace: <namespace> 1 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/part-of: \"openshift-serverless\" podSelector: {} policyTypes: - Ingress",
"oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless",
"oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/enable-secret-informer-filtering: \"true\" 1 spec: ingress: istio: enabled: true deployments: - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: activator - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: autoscaler",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service spec: labels: app: <revision_name>",
"kn service create hello --image <service-image> --limit nvidia.com/gpu=1",
"kn service update hello --limit nvidia.com/gpu=3"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/serverless/integrations |
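The two kn commands at the end of the preceding command list set and then raise an NVIDIA GPU resource limit on a Knative service. As a rough, non-authoritative sketch of what such a command produces, the following serving.knative.dev/v1 manifest shows the equivalent Service with the limit expressed directly in the revision template; the service name matches the example, but the image reference is a placeholder rather than a value taken from this document.

```yaml
# Sketch of the Service that "kn service create hello --image <service-image> --limit nvidia.com/gpu=1"
# is expected to produce; apply with "oc apply -f <filename>" as in the examples above.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: <service-image>   # placeholder image reference
        resources:
          limits:
            nvidia.com/gpu: "1"  # raise to "3" to mirror the kn service update example
```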
Chapter 2. Managing your Ansible automation controller subscription | Chapter 2. Managing your Ansible automation controller subscription Before you can use automation controller, you must have a valid subscription, which authorizes its use. 2.1. Obtaining an authorized Ansible automation controller subscription If you already have a subscription to a Red Hat product, you can acquire an automation controller subscription through that subscription. If you do not have a subscription to Red Hat Ansible Automation Platform and Red Hat Satellite, you can request a trial subscription. Procedure If you have a Red Hat Ansible Automation Platform subscription, use your Red Hat customer credentials when you launch the automation controller to access your subscription information. See Importing a subscription . If you have a non-Ansible Red Hat or Satellite subscription, access automation controller with one of these methods: Enter your username and password on the license page. Obtain a subscriptions manifest from the Subscription Allocations page on the Red Hat Customer Portal. For more information, see Obtaining a subscriptions manifest in the Automation controller User Guide . If you do not have a Red Hat Ansible Automation Platform subscription, go to Try Red Hat Ansible Automation Platform and request a trial subscription. Additional resources To understand what is supported with your subscription, see Automation controller licensing, updates and support . * If you have issues with your subscription, contact your Sales Account Manager or Red Hat Customer Service at: https://access.redhat.com/support/contact/customerService/ . 2.2. Importing a subscription After you have obtained an authorized Ansible Automation Platform subscription, you must import it into the automation controller system before you can use automation controller. Note You are opted in for Automation Analytics by default when you activate the automation controller on first time log in. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out, by doing the following: From the navigation panel, select Settings and select the Miscellaneous System settings option. Click Edit . Toggle the Gather data for Automation Analytics switch to the off position. Click Save . For opt-in of Automation Analytics to be effective, your instance of automation controller must be running on Red Hat Enterprise Linux. For more information, see the Automation Analytics section. Prerequisites You have obtained a subscriptions manifest. For more information, see Obtaining a subscriptions manifest . Procedure Launch automation controller for the first time. The Subscription Management screen displays. Retrieve and import your subscription by completing either of the following steps: If you have obtained a subscription manifest, upload it by navigating to the location where the file is saved. The subscription manifest is the complete .zip file, and not only its component parts. Note If the Browse option in the Subscription manifest option is disabled, clear the username and password fields to enable it. The subscription metadata is then retrieved from the RHSM/Satellite API, or from the manifest provided. If many subscription counts were applied in a single installation, automation controller combines the counts but uses the earliest expiration date as the expiry (at which point you must refresh your subscription). 
If you are using your Red Hat customer credentials, enter your username and password on the license page. Use your Satellite username or password if your automation controller cluster nodes are registered to Satellite with Subscription Manager. After you enter your credentials, click Get Subscriptions . Automation controller retrieves your configured subscription service. Then, it prompts you to select the subscription that you want to run and applies that metadata to automation controller. You can log in over time and retrieve new subscriptions if you have renewed. Click Next . Review and check the I agree to the End User License Agreement checkbox and click Submit . After your subscription is accepted, automation controller displays the subscription details and opens the Dashboard. Optional: To return to the Subscription settings screen from the Dashboard, select Settings Subscription settings from the navigation panel. Troubleshooting your subscription When your subscription expires (you can check this in the Subscription details of the Subscription settings window), you must renew it in automation controller. You can do this by importing a new subscription or by setting up a new one. If you see the "Error fetching licenses" message, check that you have the proper permissions required for the Satellite user. The automation controller administrator requires this to apply a subscription. The Satellite username and password are used to query the Satellite API for existing subscriptions. From the Satellite API, the automation controller receives metadata about those subscriptions, then filters through to find valid subscriptions that you can apply. These are then displayed as valid subscription options in the UI. The following Satellite roles grant proper access: Custom with view_subscriptions and view_organizations filter Viewer Administrator Organization Administrator Manager Use the Custom role for your automation controller integration, as it is the most restrictive. For more information, see the Satellite documentation on managing users and roles. Note The System Administrator role is not equal to the Administrator user checkbox, and does not offer enough permissions to access the subscriptions API page. 2.3. Troubleshooting: Keep your subscription in compliance Your subscription has two possible statuses: Compliant : Indicates that your subscription is appropriate for the number of hosts that you have automated within your subscription count. Out of compliance : Indicates that you have exceeded the number of hosts in your subscription. For more information, see Troubleshooting: Keeping your subscription in compliance in the Automation controller User Guide . 2.4. Host metric utilities Automation controller provides a way to generate a CSV output of the host metric data and host metric summary through the Command Line Interface (CLI). You can also soft delete hosts in bulk through the API. For more information, see the Host metrics utilities section of the Automation controller User Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_controller/controller-managing-subscriptions
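Subscription details can also be read back programmatically once a subscription has been imported. The following is a minimal sketch, not an official procedure: it assumes an automation controller reachable at controller.example.com and an admin account, and it relies on the /api/v2/config/ endpoint (which in automation controller/AWX typically includes a license_info section); confirm the endpoint and field names against your installed version.

```bash
# Hedged sketch: read the applied subscription/license details from the controller API.
# Host name and credentials are placeholders; replace -k with --cacert <ca.pem> for a trusted TLS setup.
curl -sk -u admin:password https://controller.example.com/api/v2/config/ | python3 -m json.tool
# Inspect the "license_info" section of the output for the subscription name, host count, and expiry.
```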
Chapter 9. Backing Up and Restoring Identity Management | Chapter 9. Backing Up and Restoring Identity Management Red Hat Enterprise Linux Identity Management provides a solution to manually back up and restore the IdM system, for example when a server stops performing correctly or data loss occurs. During backup, the system creates a directory containing information on your IdM setup and stores it. During restore, you can use this backup directory to bring your original IdM setup back. Important Use the backup and restore procedures described in this chapter only if you cannot rebuild the lost part of the IdM server group from the remaining servers in the deployment, by reinstalling the lost replicas as replicas of the remaining ones. The "Backup and Restore in IdM/IPA" Knowledgebase solution describes how to avoid losses by maintaining several server replicas. Rebuilding from an existing replica with the same data is preferable, because the backed-up version usually contains older, thus potentially outdated, information. The potential threat scenarios that backup and restore can prevent include: Catastrophic hardware failure on a machine occurs and the machine becomes incapable of further functioning. In this situation: Reinstall the operating system from scratch. Configure the machine with the same host name, fully qualified domain name (FQDN), and IP address. Install the IdM packages as well as all other optional packages relating to IdM that were present on the original system. Restore the full backup of the IdM server. An upgrade on an isolated machine fails. The operating system remains functional, but the IdM data is corrupted, which is why you want to restore the IdM system to a known good state. Important In cases of hardware or upgrade failure, such as the two mentioned above, restore from backup only if all replicas or a replica with a special role, such as the only certificate authority (CA), were lost. If a replica with the same data still exists, it is recommended to delete the lost replica and then rebuild it from the remaining one. Undesirable changes were made to the LDAP content, for example entries were deleted, and you want to revert them. Restoring backed-up LDAP data returns the LDAP entries to the state without affecting the IdM system itself. The restored server becomes the only source of information for IdM; other master servers are re-initialized from the restored server. Any data created after the last backup was made are lost. Therefore you should not use the backup and restore solution for normal system maintenance. If possible, always rebuild the lost server by reinstalling it as a replica. The backup and restore features can be managed only from the command line and are not available in the IdM web UI. 9.1. Full-Server Backup and Data-Only Backup IdM offers two backup options: Full-IdM server backup Full-server backup creates a backup copy of all the IdM server files as well as LDAP data, which makes it a standalone backup. IdM affects hundreds of files; the files that the backup process copies is a mix of whole directories and specific files, such as configuration files or log files, and relate directly to IdM or to various services that IdM depends on. Because the full-server backup is a raw file backup, it is performed offline. The script that performs the full-server backup stops all IdM services to ensure a safe course of the backup process. 
For the full list of files and directories that the full-server backup copies, see Section 9.1.3, "List of Directories and Files Copied During Backup" . Data-only Backup The data-only backup only creates a backup copy of LDAP data and the changelog. The process backs up the IPA-REALM instance and can also back up multiple back ends or only a single back end; the back ends include the IPA back end and the CA Dogtag back end. This type of backup also backs up a record of the LDAP content stored in LDIF (LDAP Data Interchange Format). The data-only backup can be performed both online and offline. By default, IdM stores the created backups in the /var/lib/ipa/backup/ directory. The naming conventions for the subdirectories containing the backups are: ipa-full-YEAR-MM-DD-HH-MM-SS in the GMT time zone for the full-server backup ipa-data-YEAR-MM-DD-HH-MM-SS in the GMT time zone for the data-only backup 9.1.1. Creating a Backup Both full-server and data-only backups are created using the ipa-backup utility which must always be run as root. To create a full-server backup, run ipa-backup . Important Performing a full-server backup stops all IdM services because the process must run offline. The IdM services will start again after the backup is finished. To create a data-only backup, run the ipa-backup --data command. You can add several additional options to ipa-backup : --online performs an online backup; this option is only available with data-only backups --logs includes the IdM service log files in the backup For further information on using ipa-backup , see the ipa-backup (1) man page. 9.1.1.1. Working Around Insufficient Space on Volumes Involved During Backup This section describes how to address problems if directories involved in the IdM backup process are stored on volumes with insufficient free space. Insufficient Space on the Volume That Contains /var/lib/ipa/backup/ If the /var/lib/ipa/backup/ directory is stored on a volume with insufficient free space, it is not possible to create a backup. To address the problem, use one of the following workarounds: Create a directory on a different volume and link it to /var/lib/ipa/backup/ . For example, if /home is stored on a different volume with enough free space: Create a directory, such as /home/idm/backup/ : Set the following permissions to the directory: If /var/lib/ipa/backup/ contains existing backups, move them to the new directory: Remove the /var/lib/ipa/backup/ directory: Create the /var/lib/ipa/backup/ link to the /home/idm/backup/ directory: Mount a directory stored on a different volume to /var/lib/ipa/backup/ . For example, if /home is stored on a different volume with enough free space, create /home/idm/backup/ and mount it to /var/lib/ipa/backup/ : Create the /home/idm/backup/ directory: Set the following permissions to the directory: If /var/lib/ipa/backup/ contains existing backups, move them to the new directory: Mount /home/idm/backup/ to /var/lib/ipa/backup/ : To automatically mount /home/idm/backup/ to /var/lib/ipa/backup/ when the system boots, append the following to the /etc/fstab file: Insufficient Space on the Volume That Contains /tmp If the backup fails due to insufficient space being available in the /tmp directory, change the location of the staged files to be created during the backup by using the TMPDIR environment variable: For more details, see the ipa-backup command fails to finish Knowledgebase solution. 9.1.2. Encrypting Backup You can encrypt the IdM backup using the GNU Privacy Guard (GPG) encryption. 
To create a GPG key: Create a keygen file containing the key details, for example, by running cat >keygen <<EOF and providing the required encryption details to the file from the command line: Generate a new key pair called backup and feed the contents of keygen to the command. The following example generates a key pair with the path names /root/backup.sec and /root/backup.pub : To create a GPG-encrypted backup, pass the generated backup key to ipa-backup by supplying the following options: --gpg , which instructs ipa-backup to perform the encrypted backup --gpg-keyring=GPG_KEYRING , which provides the full path to the GPG keyring without the file extension. For example: Note You might experience problems if your system uses the gpg2 utility to generate GPG keys because gpg2 requires an external program to function. To generate the key purely from console in this situation, add the pinentry-program /usr/bin/pinentry-curses line to the .gnupg/gpg-agent.conf file before generating a key. 9.1.3. List of Directories and Files Copied During Backup Directories: Files: Log files and directories: | [
"mkdir -p /home/idm/backup/",
"chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/",
"mv /var/lib/ipa/backup/* /home/idm/backup/",
"rm -rf /var/lib/ipa/backup/",
"ln -s /home/idm/backup/ /var/lib/ipa/backup/",
"mkdir -p /home/idm/backup/",
"chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/",
"mv /var/lib/ipa/backup/* /home/idm/backup/",
"mount -o bind /home/idm/backup/ /var/lib/ipa/backup/",
"/home/idm/backup/ /var/lib/ipa/backup/ none bind 0 0",
"TMPDIR= /path/to/backup ipa-backup",
"cat >keygen <<EOF > %echo Generating a standard key > Key-Type: RSA > Key-Length:2048 > Name-Real: IPA Backup > Name-Comment: IPA Backup > Name-Email: [email protected] > Expire-Date: 0 > %pubring /root/backup.pub > %secring /root/backup.sec > %commit > %echo done > EOF",
"gpg --batch --gen-key keygen gpg --no-default-keyring --secret-keyring /root/backup.sec --keyring /root/backup.pub --list-secret-keys",
"ipa-backup --gpg --gpg-keyring=/root/backup",
"/usr/share/ipa/html /root/.pki /etc/pki-ca /etc/pki/pki-tomcat /etc/sysconfig/pki /etc/httpd/alias /var/lib/pki /var/lib/pki-ca /var/lib/ipa/sysrestore /var/lib/ipa-client/sysrestore /var/lib/ipa/dnssec /var/lib/sss/pubconf/krb5.include.d/ /var/lib/authconfig/last /var/lib/certmonger /var/lib/ipa /var/run/dirsrv /var/lock/dirsrv",
"/etc/named.conf /etc/named.keytab /etc/resolv.conf /etc/sysconfig/pki-ca /etc/sysconfig/pki-tomcat /etc/sysconfig/dirsrv /etc/sysconfig/ntpd /etc/sysconfig/krb5kdc /etc/sysconfig/pki/ca/pki-ca /etc/sysconfig/ipa-dnskeysyncd /etc/sysconfig/ipa-ods-exporter /etc/sysconfig/named /etc/sysconfig/ods /etc/sysconfig/authconfig /etc/ipa/nssdb/pwdfile.txt /etc/pki/ca-trust/source/ipa.p11-kit /etc/pki/ca-trust/source/anchors/ipa-ca.crt /etc/nsswitch.conf /etc/krb5.keytab /etc/sssd/sssd.conf /etc/openldap/ldap.conf /etc/security/limits.conf /etc/httpd/conf/password.conf /etc/httpd/conf/ipa.keytab /etc/httpd/conf.d/ipa-pki-proxy.conf /etc/httpd/conf.d/ipa-rewrite.conf /etc/httpd/conf.d/nss.conf /etc/httpd/conf.d/ipa.conf /etc/ssh/sshd_config /etc/ssh/ssh_config /etc/krb5.conf /etc/ipa/ca.crt /etc/ipa/default.conf /etc/dirsrv/ds.keytab /etc/ntp.conf /etc/samba/smb.conf /etc/samba/samba.keytab /root/ca-agent.p12 /root/cacert.p12 /var/kerberos/krb5kdc/kdc.conf /etc/systemd/system/multi-user.target.wants/ipa.service /etc/systemd/system/multi-user.target.wants/sssd.service /etc/systemd/system/multi-user.target.wants/certmonger.service /etc/systemd/system/pki-tomcatd.target.wants/[email protected] /var/run/ipa/services.list /etc/opendnssec/conf.xml /etc/opendnssec/kasp.xml /etc/ipa/dnssec/softhsm2.conf /etc/ipa/dnssec/softhsm_pin_so /etc/ipa/dnssec/ipa-ods-exporter.keytab /etc/ipa/dnssec/ipa-dnskeysyncd.keytab /etc/idm/nssdb/cert8.db /etc/idm/nssdb/key3.db /etc/idm/nssdb/secmod.db /etc/ipa/nssdb/cert8.db /etc/ipa/nssdb/key3.db /etc/ipa/nssdb/secmod.db",
"/var/log/pki-ca /var/log/pki/ /var/log/dirsrv/slapd-PKI-IPA /var/log/httpd /var/log/ipaserver-install.log /var/log/kadmind.log /var/log/pki-ca-install.log /var/log/messages /var/log/ipaclient-install.log /var/log/secure /var/log/ipaserver-uninstall.log /var/log/pki-ca-uninstall.log /var/log/ipaclient-uninstall.log /var/named/data/named.run"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/backup-restore |
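Because ipa-backup writes timestamped ipa-full-* and ipa-data-* directories under /var/lib/ipa/backup/, it is straightforward to wrap it in a small retention script for scheduled runs. The following is a minimal sketch under stated assumptions: it uses the default backup location, keeps the seven most recent data-only backups (an arbitrary choice), and must run as root; adjust the options if you need full offline or GPG-encrypted backups.

```bash
#!/bin/bash
# Sketch: nightly online data-only IdM backup with simple retention.
set -eu

KEEP=7                              # number of data-only backups to retain (assumption)
BACKUP_DIR=/var/lib/ipa/backup      # default backup location used by ipa-backup

# Online, data-only backup; use plain "ipa-backup" for a full offline backup,
# or add --gpg --gpg-keyring=/root/backup for an encrypted one.
ipa-backup --data --online

# Delete everything except the newest $KEEP ipa-data-* directories.
ls -1dt "${BACKUP_DIR}"/ipa-data-* 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm -rf
```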
Chapter 3. Installing Ansible development tools | Chapter 3. Installing Ansible development tools Red Hat provides two options for installing Ansible development tools. Installation on a RHEL container running inside VS Code. You can install this option on MacOS, Windows, and Linux systems. Installation on your local RHEL system using an RPM (Red Hat Package Manager) package. 3.1. Requirements To install and use Ansible development tools, you must meet the following requirements. Extra requirements for Windows installations and containerized installations are indicated in the procedures. Python 3.10 or later. VS Code (Visual Studio Code) with the Ansible extension added. See Installing VS Code . For containerized installations, the Microsoft Dev Containers VS Code extension. See Installing and configuring the Dev Containers extension . A containerization platform, for example Podman, Podman Desktop, Docker, or Docker Desktop. Note The installation procedure for Ansible development tools on Windows covers the use of Podman Desktop only. See Installing Podman Desktop on a Windows machine . You have a Red Hat account and you can log in to the Red Hat container registry at registry.redhat.io . For information about logging in to registry.redhat.io , see Authenticating with the Red Hat container registry . 3.1.1. Requirements for Ansible development tools on Windows If you are installing Ansible development tools on a container in VS Code on Windows, there are extra requirements: Windows Subsystem for Linux (WSL2) Podman Desktop 3.1.1.1. Installing WSL Install WSL2 without a distribution: USD `wsl --install --no-distribution` Use cgroupsv2 by disabling cgroupsv1 for WSL2: Edit the %USERPROFILE%/wsl.conf file and add the following lines to force cgroupv2 usage: [wsl2] kernelCommandLine = cgroup_no_v1="all" 3.1.1.2. Installing Podman Desktop on a Windows machine Install Podman Desktop. Follow the instructions in Installing Podman Desktop and Podman on Windows in the Podman Desktop documentation. You do not need to change the default settings in the set-up wizard. Ensure the podman machine is using cgroupsv2 : USD podman info | findstr cgroup Test Podman Desktop: USD podman run hello 3.1.1.3. Configuring settings for Podman Desktop Add a %USERPROFILE%\bin\docker.bat file with the following content: @echo off podman %* This avoids having to install Docker as required by the VS Code Dev Container extension. Add the %USERPROFILE%\bin directory to the PATH . Select Settings and search for "Edit environment variables for your account" to display all of the user environment variables. Highlight "Path" in the top user variables box, click Edit and add the path. Click Save to set the path for any new console that you open. 3.1.2. 
If you are running Ansible development tools on a container inside VS Code and you want to pull execution environments or the devcontainer to use as an execution environment, you must log in from a terminal prompt within the devcontainer from a terminal inside VS Code. You can use the podman login or docker login commands with your credentials to access content on the registry. Podman USD podman login registry.redhat.io Username: my__redhat_username Password: *********** Docker USD docker login registry.redhat.io Username: my__redhat_username Password: *********** For more information about Red Hat container registry authentication, see Red Hat Container Registry Authentication on the Red Hat customer portal. 3.1.3. Installing VS Code To install VS Code, follow the instructions on the Download Visual Studio Code page in the Visual Studio Code documentation. 3.1.4. Installing the VS Code Ansible extension The Ansible extension adds language support for Ansible to VS Code. It incorporates Ansible development tools to facilitate creating and running automation content. For a full description of the Ansible extension, see the Visual Studio Code Marketplace . See Learning path - Getting Started with the Ansible VS Code Extension for tutorials on working with the extension. To install the Ansible VS Code extension: Open VS Code. Click the Extensions ( ) icon in the Activity Bar, or click View Extensions , to display the Extensions view. In the search field in the Extensions view, type Ansible Red Hat . Select the Ansible extension and click Install . When the language for a file is recognized as Ansible, the Ansible extension provides features such as auto-completion, hover, diagnostics, and goto. The language identified for a file is displayed in the Status bar at the bottom of the VS Code window. The following files are assigned the Ansible language: YAML files in a /playbooks directory Files with the following double extension: .ansible.yml or .ansible.yaml Certain YAML names recognized by Ansible, for example site.yml or site.yaml YAML files whose filename contains "playbook": playbook .yml or playbook .yaml If the extension does not identify the language for your playbook files as Ansible, follow the procedure in Associating the Ansible language to YAML files . 3.1.5. Configuring Ansible extension settings The Ansible extension supports multiple configuration options. You can configure the settings for the extension on a user level, on a workspace level, or for a particular directory. User-based settings are applied globally for any instance of VS Code that is opened. Workspace settings are stored within your workspace and only apply when the current workspace is opened. It is useful to configure settings for your workspace for the following reasons: If you define and maintain configurations specific to your playbook project, you can customize your Ansible development environment for individual projects without altering your preferred setup for other work. You can have different settings for a Python project, an Ansible project, and a C++ project, each optimized for the respective stack without the need to manually reconfigure settings each time you switch projects. If you include workspace settings when setting up version control for a project you want to share with your team, everyone uses the same configuration for that project. Procedure Open the Ansible extension settings: Click the 'Extensions' icon in the activity bar. 
Select the Ansible extension, and click the 'gear' icon and then Extension Settings to display the extension settings. Alternatively, click Code Settings Settings to open the Settings page. Enter Ansible in the search bar to display the settings for the extension. Select the Workspace tab to configure your settings for the current VS Code workspace. The Ansible extension settings are pre-populated. Modify the settings to suit your requirements: Check the Ansible Validation Lint: Enabled box to enable ansible-lint. Check the Ansible Execution Environment: Enabled box to use an execution environment. Specify the execution environment image you want to use in the Ansible > Execution Environment: image field. To use Red Hat Ansible Lightspeed, check the Ansible > Lightspeed: Enabled box, and enter the URL for Lightspeed. The settings are documented on the Ansible VS Code Extension by Red Hat page in the VisualStudio marketplace documentation. 3.1.6. Associating the Ansible language to YAML files The Ansible VS Code extension works only when the language associated with a file is set to Ansible. The extension provides features that help create Ansible playbooks, such as auto-completion, hover, and diagnostics. The Ansible VS Code extension automatically associates the Ansible language with some files. The procedures below describe how to set the language for files that are not recognized as Ansible files. Manually associating the Ansible language to YAML files The following procedure describes how to manually assign the Ansible language to a YAML file that is open in VS Code. Open or create a YAML file in VS Code. Hover the cursor over the language identified in the status bar at the bottom of the VS Code window to open the Select Language Mode list. Select Ansible in the list. The language shown in the status bar at the bottom of the VS Code window for the file is changed to Ansible. Adding persistent file association for the Ansible language to settings.json Alternatively, you can add file association for the Ansible language in your settings.json file. Open the settings.json file: Click View Command Palette to open the command palette. Enter Workspace settings in the search box and select Open Workspace Settings (JSON) . Add the following code to settings.json . { ... "files.associations": { "*plays.yml": "ansible", "*init.yml": "yaml", } } 3.1.7. Installing and configuring the Dev Containers extension If you are installing the containerized version of Ansible development tools, you must install the Microsoft Dev Containers extension in VS Code. Open VS Code. Click the Extensions ( ) icon in the Activity Bar, or click View Extensions , to display the Extensions view. In the search field in the Extensions view, type Dev Containers . Select the Dev Containers extension from Microsoft and click Install . If you are using Podman or Podman Desktop as your containerization platform, you must modify the default settings in the Dev Containers extension. Replace docker with podman in the Dev Containers extension settings: In VS Code, open the settings editor. Search for @ext:ms-vscode-remote.remote-containers . Alternatively, click the Extensions icon in the activity bar and click the gear icon for the Dev Containers extension. Set Dev > Containers:Docker Path to podman . Set Dev > Containers:Docker Compose Path to podman-compose . 3.2. 
Installing Ansible development tools on a container inside VS Code The Dev Containers VS Code extension requires a .devcontainer file to store settings for your dev containers. You must use the Ansible extension to scaffold a config file for your dev container, and reopen your directory in a container in VS Code. Prerequisites You have installed a containerization platform, for example Podman, Podman Desktop, Docker, or Docker Desktop. You have a Red Hat login and you have logged in to the Red Hat registry at registry.redhat.io . For information about logging in to registry.redhat.io , see Authenticating with the Red Hat container registry . You have installed VS Code. You have installed the Ansible extension in VS Code. You have installed the Microsoft Dev Containers extension in VS Code. If you are installing Ansible development tools on Windows, launch VS Code and connect to the WSL machine: Click the Remote ( ) icon. In the dropdown menu that appears, select the option to connect to the WSL machine. Procedure In VS Code, navigate to your project directory. Click the Ansible icon in the VS Code activity bar to open the Ansible extension. In the Ansible Development Tools section of the Ansible extension, scroll down to the ADD option and select Devcontainer . In the Create a devcontainer page, select the Downstream container image from the Container image options. This action adds devcontainer.json files for both Podman and Docker in a .devcontainer directory. Reopen or reload the project directory: If VS Code detects that your directory contains a devcontainer.json file, the following notification appears: Click Reopen in Container . If the notification does not appear, click the Remote ( ) icon. In the dropdown menu that appears, select Reopen in Container . Select the dev container for Podman or Docker according to the containerization platform you are using. The Remote () status in the VS Code Status bar displays opening Remote and a notification indicates the progress in opening the container. Verification When the directory reopens in a container, the Remote () status displays Dev Container: ansible-dev-container . Note The base image for the container is a Universal Base Image Minimal (UBI Minimal) image that uses microdnf as a package manager. The dnf and yum package managers are not available in the container. For information about using microdnf in containers based on UBI Minimal images, see Adding software in a minimal UBI container in the Red Hat Enterprise Linux Building, running, and managing containers guide. 3.3. Installing Ansible development tools from a package on RHEL Ansible development tools is bundled in the Ansible Automation Platform RPM (Red Hat Package Manager) package. Refer to the Red Hat Ansible Automation Platform Installation guide documentation for information on installing Ansible Automation Platform. Prerequisites You have installed RHEL. You have registered your system with Red Hat Subscription Manager. You have installed a containerization platform, for example Podman or Docker. Procedure Run the following command to check whether Simple Content Access (SCA) is enabled: USD sudo subscription-manager status If Simple Content Access is enabled, the output contains the following message: Content Access Mode is set to Simple Content Access. 
If Simple Content Access is not enabled, attach the Red Hat Ansible Automation Platform SKU: USD sudo subscription-manager attach --pool=<sku-pool-id> Install Ansible development tools with the following command: USD sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-dev-tools USD sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-dev-tools Verification: Verify that the Ansible development tools components have been installed: USD rpm -aq | grep ansible The output displays the Ansible packages that are installed: ansible-sign-0.1.1-2.el9ap.noarch ansible-creator-24.4.1-1.el9ap.noarch python3.11-ansible-runner-2.4.0-0.1.20240412.git764790f.el9ap.noarch ansible-runner-2.4.0-0.1.20240412.git764790f.el9ap.noarch ansible-builder-3.1.0-0.2.20240413.git167ed5c.el9ap.noarch ansible-dev-environment-24.1.0-2.el9ap.noarch ansible-core-2.16.6-0.1.20240413.gite636132.el9ap.noarch python3.11-ansible-compat-4.1.11-2.el9ap.noarch python3.11-pytest-ansible-24.1.2-1.el9ap.noarch ansible-lint-6.14.3-4.el9ap.noarch ansible-navigator-3.4.1-2.el9ap.noarch python3.11-tox-ansible-24.2.0-1.el9ap.noarch ansible-dev-tools-2.5-2.el9ap.noarch On successful installation, you can view the help documentation for ansible-creator: USD ansible-creator --help usage: ansible-creator [-h] [--version] command ... The fastest way to generate all your ansible content. Positional arguments: command add Add resources to an existing Ansible project. init Initialize a new Ansible project. Options: --version Print ansible-creator version and exit. -h --help Show this help message and exit | [
"`wsl --install --no-distribution`",
"[wsl2] kernelCommandLine = cgroup_no_v1=\"all\"",
"podman info | findstr cgroup",
"podman run hello",
"@echo off %*",
"podman login registry.redhat.io Username: my__redhat_username Password: ***********",
"docker login registry.redhat.io Username: my__redhat_username Password: ***********",
"{ \"files.associations\": { \"*plays.yml\": \"ansible\", \"*init.yml\": \"yaml\", } }",
"sudo subscription-manager status",
"Content Access Mode is set to Simple Content Access.",
"sudo subscription-manager attach --pool=<sku-pool-id>",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-dev-tools",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-dev-tools",
"rpm -aq | grep ansible",
"ansible-sign-0.1.1-2.el9ap.noarch ansible-creator-24.4.1-1.el9ap.noarch python3.11-ansible-runner-2.4.0-0.1.20240412.git764790f.el9ap.noarch ansible-runner-2.4.0-0.1.20240412.git764790f.el9ap.noarch ansible-builder-3.1.0-0.2.20240413.git167ed5c.el9ap.noarch ansible-dev-environment-24.1.0-2.el9ap.noarch ansible-core-2.16.6-0.1.20240413.gite636132.el9ap.noarch python3.11-ansible-compat-4.1.11-2.el9ap.noarch python3.11-pytest-ansible-24.1.2-1.el9ap.noarch ansible-lint-6.14.3-4.el9ap.noarch ansible-navigator-3.4.1-2.el9ap.noarch python3.11-tox-ansible-24.2.0-1.el9ap.noarch ansible-dev-tools-2.5-2.el9ap.noarch",
"ansible-creator --help usage: ansible-creator [-h] [--version] command The fastest way to generate all your ansible content. Positional arguments: command add Add resources to an existing Ansible project. init Initialize a new Ansible project. Options: --version Print ansible-creator version and exit. -h --help Show this help message and exit"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/installing-devtools |
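Section 3.1.5 describes the workspace-level settings the Ansible extension exposes (lint validation, execution environments, and Ansible Lightspeed). A .vscode/settings.json sketch of those options is shown below; the key names follow the extension's documented settings, but treat them as assumptions and confirm the exact identifiers in the extension's Settings UI, and note that the execution environment image value here is a placeholder. VS Code accepts comments in settings.json.

```jsonc
// .vscode/settings.json — workspace-level sketch of the options discussed in section 3.1.5.
{
  "ansible.validation.lint.enabled": true,          // enable ansible-lint
  "ansible.executionEnvironment.enabled": true,     // run content inside an execution environment
  "ansible.executionEnvironment.image": "<execution-environment-image>",  // placeholder image
  "ansible.lightspeed.enabled": false               // set to true and add the URL to use Lightspeed
}
```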
Chapter 7. Building Dockerfiles | Chapter 7. Building Dockerfiles Red Hat Quay supports the ability to build Dockerfiles on our build fleet and push the resulting image to the repository. 7.1. Viewing and managing builds Repository Builds can be viewed and managed by clicking the Builds tab in the Repository View . 7.2. Manually starting a build To manually start a repository build, click the + icon in the top right of the header on any repository page and choose New Dockerfile Build . An uploaded Dockerfile , .tar.gz , or an HTTP URL to either can be used for the build. Note You will not be able to specify the Docker build context when manually starting a build. 7.3. Build Triggers Repository builds can also be automatically triggered by events such as a push to an SCM (GitHub, BitBucket or GitLab) or via a call to a webhook . 7.3.1. Creating a new build trigger To setup a build trigger, click the Create Build Trigger button on the Builds view page and follow the instructions of the dialog. You will need to grant Red Hat Quay access to your repositories in order to setup the trigger and your account requires admin access on the SCM repository . 7.3.2. Manually triggering a build trigger To trigger a build trigger manually, click the icon to the build trigger and choose Run Now . 7.3.3. Build Contexts When building an image with Docker, a directory is specified to become the build context. This holds true for both manual builds and build triggers because the builds conducted by Red Hat Quay are no different from running docker build on your own machine. Red Hat Quay build contexts are always the specified subdirectory from the build setup and fallback to the root of the build source if none is specified. When a build is triggered, Red Hat Quay build workers clone the git repository to the worker machine and enter the build context before conducting a build. For builds based on tar archives, build workers extract the archive and enter the build context. For example: Imagine the example above is the directory structure for a GitHub repository called "example". If no subdirectory is specified in the build trigger setup or while manually starting a build, the build will operate in the example directory. If subdir is specified to be the subdirectory in the build trigger setup, only the Dockerfile within it is visible to the build. This means that you cannot use the ADD command in the Dockerfile to add file , because it is outside of the build context. Unlike the Docker Hub, the Dockerfile is part of the build context on Red Hat Quay. Thus, it must not appear in the .dockerignore file. | [
"example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/building_dockerfiles |
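To make the build-context rules above concrete, the following Dockerfile is a minimal sketch of what example/subdir/Dockerfile could contain when subdir is configured as the build context; the base image and the final command are arbitrary illustrations, not values required by Red Hat Quay.

```dockerfile
# Sketch of example/subdir/Dockerfile built with "subdir" as the build context.
# Only files under subdir/ are visible to the build worker.
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /app
# Copies the contents of the build context (subdir/) only.
# "ADD ../file /app/" or "COPY ../file /app/" would fail: ../file is outside the context.
COPY . /app
CMD ["ls", "/app"]
```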
Chapter 20. Managing Guest Virtual Machines with virsh | Chapter 20. Managing Guest Virtual Machines with virsh virsh is a command-line interface tool for managing guest virtual machines, and works as the primary means of controlling virtualization on Red Hat Enterprise Linux 7. The virsh command-line tool is built on the libvirt management API, and can be used to create, deploy, and manage guest virtual machines. The virsh utility is ideal for creating virtualization administration scripts, and users without root privileges can use it in read-only mode. The virsh package is installed with yum as part of the libvirt-client package. For installation instructions, see Section 2.2.1, "Installing Virtualization Packages Manually" . For a general introduction of virsh, including a practical demonstration, see the Virtualization Getting Started Guide The remaining sections of this chapter cover the virsh command set in a logical order based on usage. Note Note that when using the help or when reading the man pages, the term 'domain' will be used instead of the term guest virtual machine. This is the term used by libvirt . In cases where the screen output is displayed and the word 'domain' is used, it will not be switched to guest or guest virtual machine. In all examples, the guest virtual machine 'guest1' will be used. You should replace this with the name of your guest virtual machine in all cases. When creating a name for a guest virtual machine you should use a short easy to remember integer (0,1,2...), a text string name, or in all cases you can also use the virtual machine's full UUID. Important It is important to note which user you are using. If you create a guest virtual machine using one user, you will not be able to retrieve information about it using another user. This is especially important when you create a virtual machine in virt-manager. The default user is root in that case unless otherwise specified. Should you have a case where you cannot list the virtual machine using the virsh list --all command, it is most likely due to you running the command using a different user than you used to create the virtual machine. See Important for more information. 20.1. Guest Virtual Machine States and Types Several virsh commands are affected by the state of the guest virtual machine: Transient - A transient guest does not survive reboot. Persistent - A persistent guest virtual machine survives reboot and lasts until it is deleted. During the life cycle of a virtual machine, libvirt will classify the guest as any of the following states: Undefined - This is a guest virtual machine that has not been defined or created. As such, libvirt is unaware of any guest in this state and will not report about guest virtual machines in this state. Shut off - This is a guest virtual machine which is defined, but is not running. Only persistent guests can be considered shut off. As such, when a transient guest virtual machine is put into this state, it ceases to exist. Running - The guest virtual machine in this state has been defined and is currently working. This state can be used with both persistent and transient guest virtual machines. Paused - The guest virtual machine's execution on the hypervisor has been suspended, or its state has been temporarily stored until it is resumed. Guest virtual machines in this state are not aware they have been suspended and do not notice that time has passed when they are resumed. 
Saved - This state is similar to the paused state, however the guest virtual machine's configuration is saved to persistent storage. Any guest virtual machine in this state is not aware it is paused and does not notice that time has passed once it has been restored. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-managing_guest_virtual_machines_with_virsh |
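The state model above maps directly onto everyday virsh operations. The commands below are a short, hedged sketch using the guest name guest1 from this chapter; run them as the same user that defined the guest, because guests created by another user (for example root through virt-manager) are not visible.

```bash
virsh list --all           # running, paused, and shut-off (persistent) guests
virsh dominfo guest1       # state, ID, UUID, memory, and vCPU details
virsh start guest1         # boot a defined (persistent) guest
virsh suspend guest1       # move a running guest to the paused state
virsh resume guest1        # return a paused guest to the running state
virsh managedsave guest1   # save state to disk (the saved state) and stop the guest
virsh shutdown guest1      # gracefully shut the guest off
```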
Chapter 5. Converting physical machines to virtual machines | Chapter 5. Converting physical machines to virtual machines Warning The Red Hat Enterprise Linux 6 version of the virt-v2v utility has been deprecated. Users of Red Hat Enterprise Linux 6 are advised to create a Red Hat Enterprise Linux 7 virtual machine, and install virt-v2v in that virtual machine. The Red Hat Enterprise Linux 7 version is fully supported and documented in virt-v2v Knowledgebase articles . Read this chapter for information about converting physical machines to virtual machines with the Red Hat Physical-to-Virtual (P2V) solution, Virt P2V. Virt P2V is comprised of both virt-p2v-server , included in the virt-v2v package, and the P2V client, available from the Red Hat Customer Portal as rhel-6.x-p2v.iso . rhel-6.x-p2v.iso is a bootable disk image based on a customized Red Hat Enterprise Linux 6 image. Booting a machine from rhel-6.x-p2v.iso and connecting to a V2V conversion server that has virt-v2v installed allows data from the physical machine to be uploaded to the conversion server and converted for use with either Red Hat Enterprise Virtualization, or KVM managed by libvirt . Note that the host must be running Red Hat Enterprise Linux 6. Other host configurations will not work. Important Adhere to the following rules. Failure to do so may cause the loss of data and disk malfunction: The Physical to Virtual (P2V) feature requires a Red Hat Enterprise Linux 6 virtualization host with virt-v2v version 0.8.7 or later. You can check your version of virt-v2v by running USD rpm -q virt-v2v . Note that you cannot convert to a Red Hat Enterprise Linux 5 conversion server, or with a virt-v2v package earlier than version 0.8.7-6.el6. A number of operating systems can be converted from physical machines to virtual machines, but be aware that there are known issues converting physical machines using software RAID. Red Hat Enterprise Linux 6 machines with a filesystem root on a software RAID md device may be converted to guest virtual machines. Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5 physical machines with their filesystem root on a software RAID md device cannot be converted to virtual machines. There is currently no workaround available. 5.1. Prerequisites For a physical machine to be converted using the P2V client, it must meet basic hardware requirements in order to successfully boot the P2V client: Must be bootable from PXE, Optical Media (CD, DVD), or USB. At least 512 MB of RAM. An ethernet connection. Console access (keyboard, video, mouse). An operating system supported by virt-v2v : Red Hat Enterprise Linux 3.9 Red Hat Enterprise Linux 4 Red Hat Enterprise Linux 5 Red Hat Enterprise Linux 6 Windows XP Windows Vista Windows 7 Windows Server 2003 Windows Server 2008 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/chap-V2V_Guide-P2V_Migration_Converting_Physical_Machines_to_Virtual_Machines
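Before starting a conversion, it is worth confirming on the conversion server that the virt-v2v package meets the version requirement quoted above. A minimal check, using only the rpm query already mentioned in this chapter:

```bash
# Run on the RHEL 6 conversion server before attempting a P2V conversion.
rpm -q virt-v2v                                   # package must be installed
rpm -q --qf '%{VERSION}-%{RELEASE}\n' virt-v2v    # must report 0.8.7-6.el6 or later
```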
7.50. ethtool | 7.50. ethtool 7.50.1. RHBA-2013:0366 - ethtool bug fix and enhancement update Updated ethtool packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The ethtool utility allows the querying and changing of Ethernet adapter settings, such as port speed, auto-negotiation, and device-specific performance options. Note The ethtool packages have been upgraded to upstream version 3.5, which provides a number of bug fixes and enhancements over the previous version. (BZ#819846) All users of ethtool are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ethtool
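For reference, the kinds of queries and changes the ethtool utility performs are illustrated below. This is a hedged sketch only: eth0 is a placeholder interface name, and whether a given setting can be changed depends on the driver and hardware.

```bash
ethtool eth0                                         # query link speed, duplex, and auto-negotiation
ethtool -i eth0                                      # show driver and firmware information
ethtool -s eth0 speed 1000 duplex full autoneg off   # change port settings (driver support varies)
ethtool -k eth0                                      # list device-specific performance (offload) options
```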
Chapter 4. Configuring kdump on the command line | Chapter 4. Configuring kdump on the command line The memory for kdump is reserved during the system boot. You can configure the memory size in the system's Grand Unified Bootloader (GRUB) configuration file. The memory size depends on the crashkernel= value specified in the configuration file and the size of the physical memory of system. 4.1. Estimating the kdump size When planning and building your kdump environment, it is important to know the space required by the crash dump file. The makedumpfile --mem-usage command estimates the space required by the crash dump file. It generates a memory usage report. The report helps you decide the dump level and the pages that are safe to exclude. Procedure Enter the following command to generate a memory usage report: Important The makedumpfile --mem-usage command reports required memory in pages. This means that you must calculate the size of memory in use against the kernel page size. 4.2. Configuring kdump memory usage on RHEL 9 The kexec-tools package maintains the default crashkernel= memory reservation values. The kdump service uses the default value to reserve the crash kernel memory for each kernel. The default value can also serve as the reference base value to estimate the required memory size when you set the crashkernel= value manually. The minimum size of the crash kernel can vary depending on the hardware and machine specifications. The automatic memory allocation for kdump also varies based on the system hardware architecture and available memory size. For example, on AMD64 and Intel 64-bit architectures, the default value for the crashkernel= parameter will work only when the available memory is more than 1 GB. The kexec-tools utility configures the following default memory reserves on AMD64 and Intel 64-bit architecture: You can also run kdumpctl estimate to get an approximate value without triggering a crash. The estimated crashkernel= value might not be an exact one but can serve as a reference to set an appropriate crashkernel= value. Note The crashkernel=auto option in the boot command line is no longer supported on RHEL 9 and later releases. Prerequisites You have root permissions on the system. You have fulfilled kdump requirements for configurations and targets. For details, see Supported kdump configurations and targets . You have installed the zipl utility if it is the IBM Z system. Procedure Configure the default value for crash kernel: When configuring the crashkernel= value, test the configuration by rebooting the system with kdump enabled. If the kdump kernel fails to boot, increase the memory size gradually to set an acceptable value. To use a custom crashkernel= value: Configure the required memory reserve. Optionally, you can set the amount of reserved memory to a variable depending on the total amount of installed memory by using the syntax crashkernel= <range1>:<size1>,<range2>:<size2> . For example: The example reserves 192 MB of memory if the total amount of system memory is 1 GB or higher and lower than 4 GB. If the total amount of memory is more than 4 GB, 256 MB is reserved for kdump . Optional: Offset the reserved memory. Some systems require to reserve memory with a certain fixed offset since crashkernel reservation is very early, and it wants to reserve some area for special usage. If the offset is set, the reserved memory begins there. 
To offset the reserved memory, use the following syntax: The example reserves 192 MB of memory starting at 16 MB (physical address 0x01000000). If you offset to 0 or do not specify a value, kdump offsets the reserved memory automatically. You can also offset memory when setting a variable memory reservation by specifying the offset as the last value. For example, crashkernel=1G-4G:192M,2G-64G:256M@16M . Update the boot loader configuration: The <custom-value> must contain the custom crashkernel= value that you have configured for the crash kernel. Reboot for changes to take effect: Verification The commands to test kdump configuration will cause the kernel to crash with data loss. Follow the instructions with care. You must not use an active production system to test the kdump configuration. Cause the kernel to crash by activating the sysrq key. The address-YYYY-MM-DD-HH:MM:SS/vmcore file is saved to the target location as specified in the /etc/kdump.conf file. If you select the default target location, the vmcore file is saved in the partition mounted under /var/crash/ . Activate the sysrq key to boot into the kdump kernel: The command causes kernel to crash and reboots the kernel if required. Display the /etc/kdump.conf file and check if the vmcore file is saved in the target destination. Additional resources How to manually modify the boot parameter in grub before the system boots . grubby(8) man page on your system. 4.3. Configuring the kdump target The crash dump is usually stored as a file in a local file system, written directly to a device. Optionally, you can send crash dump over a network by using the NFS or SSH protocols. Only one of these options to preserve a crash dump file can be set at a time. The default behavior is to store it in the /var/crash/ directory of the local file system. Prerequisites You have root permissions on the system. Fulfilled requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . Procedure To store the crash dump file in /var/crash/ directory of the local file system, edit the /etc/kdump.conf file and specify the path: The option path /var/crash represents the path to the file system in which kdump saves the crash dump file. Note When you specify a dump target in the /etc/kdump.conf file, then the path is relative to the specified dump target. When you do not specify a dump target in the /etc/kdump.conf file, then the path represents the absolute path from the root directory. Depending on the file system mounted in the current system, the dump target and the adjusted dump path are configured automatically. To secure the crash dump file and the accompanying files produced by kdump , you should set up proper attributes for the target destination directory, such as user permissions and SELinux contexts. Additionally, you can define a script, for example kdump_post.sh in the kdump.conf file as follows: The kdump_post directive specifies a shell script or a command that executes after kdump has completed capturing and saving a crash dump to the specified destination. You can use this mechanism to extend the functionality of kdump to perform actions including the adjustments in file permissions. The kdump target configuration The dump target is specified ( ext4 /dev/mapper/vg00-varcrashvol ), and, therefore, it is mounted at /var/crash . The path option is also set to /var/crash . Therefore, the kdump saves the vmcore file in the /var/crash/var/crash directory. 
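The target configuration just described corresponds to a small /etc/kdump.conf fragment. The sketch below restates it: the ext4 device and path lines come from the example above, the core_collector line is the default discussed later in this chapter, and the kdump_post script path is an illustrative assumption.

```
ext4 /dev/mapper/vg00-varcrashvol
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
# kdump_post /var/crash/scripts/kdump_post.sh   # optional post-capture hook (path is an example)
```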
To change the local directory for saving the crash dump, edit the /etc/kdump.conf configuration file as a root user: Remove the hash sign ( # ) from the beginning of the #path /var/crash line. Replace the value with the intended directory path. For example: Important In RHEL 9, the directory defined as the kdump target using the path directive must exist when the kdump systemd service starts to avoid failures. Unlike in earlier versions of RHEL, the directory is no longer created automatically if it does not exist when the service starts. To write the file to a different partition, edit the /etc/kdump.conf configuration file: Remove the hash sign ( # ) from the beginning of the #ext4 line, depending on your choice. device name (the #ext4 /dev/vg/lv_kdump line) file system label (the #ext4 LABEL=/boot line) UUID (the #ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937 line) Change the file system type and the device name, label or UUID, to the required values. The correct syntax for specifying UUID values is both UUID="correct-uuid" and UUID=correct-uuid . For example: Important It is recommended to specify storage devices by using a LABEL= or UUID= . Disk device names such as /dev/sda3 are not guaranteed to be consistent across reboot. When you use Direct Access Storage Device (DASD) on IBM Z hardware, ensure the dump devices are correctly specified in /etc/dasd.conf before proceeding with kdump . To write the crash dump directly to a device, edit the /etc/kdump.conf configuration file: Remove the hash sign ( # ) from the beginning of the #raw /dev/vg/lv_kdump line. Replace the value with the intended device name. For example: To store the crash dump to a remote machine by using the NFS protocol: Remove the hash sign ( # ) from the beginning of the #nfs my.server.com:/export/tmp line. Replace the value with a valid hostname and directory path. For example: Restart the kdump service for the changes to take effect: Note While using the NFS directive to specify the NFS target, kdump.service automatically attempts to mount the NFS target to check the disk space. There is no need to mount the NFS target in advance. To prevent kdump.service from mounting the target, use the dracut_args --mount directive in kdump.conf . This will enable kdump.service to call the dracut utility with the --mount argument to specify the NFS target. To store the crash dump to a remote machine by using the SSH protocol: Remove the hash sign ( # ) from the beginning of the #ssh [email protected] line. Replace the value with a valid username and hostname. Include your SSH key in the configuration. Remove the hash sign from the beginning of the #sshkey /root/.ssh/kdump_id_rsa line. Change the value to the location of a key valid on the server you are trying to dump to. For example: Additional resources Files produced by kdump after system crash . 4.4. Configuring the kdump core collector The kdump service uses a core_collector program to capture the crash dump image. In RHEL, the makedumpfile utility is the default core collector. It helps shrink the dump file by: Compressing the size of a crash dump file and copying only necessary pages by using various dump levels. Excluding unnecessary crash dump pages. Filtering the page types to be included in the crash dump. Note Crash dump file compression is enabled by default in the RHEL 7 and above. If you need to customize the crash dump file compression, follow this procedure. 
Syntax Options -c , -l or -p : specify compress dump file format by each page using either, zlib for -c option, lzo for -l option or snappy for -p option. -d (dump_level) : excludes pages so that they are not copied to the dump file. --message-level : specify the message types. You can restrict outputs printed by specifying message_level with this option. For example, specifying 7 as message_level prints common messages and error messages. The maximum value of message_level is 31. Prerequisites You have root permissions on the system. Fulfilled requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . Procedure As a root , edit the /etc/kdump.conf configuration file and remove the hash sign ("#") from the beginning of the #core_collector makedumpfile -l --message-level 1 -d 31 . Enter the following command to enable crash dump file compression: The -l option specifies the dump compressed file format. The -d option specifies dump level as 31. The --message-level option specifies message level as 1. Also, consider following examples with the -c and -p options: To compress a crash dump file by using -c : To compress a crash dump file by using -p : Additional resources makedumpfile(8) man page on your system Configuration file for kdump 4.5. Configuring the kdump default failure responses By default, when kdump fails to create a crash dump file at the configured target location, the system reboots and the dump is lost in the process. You can change the default failure response and configure kdump to perform a different operation when it fails to save the core dump to the primary target. The additional actions are: dump_to_rootfs Saves the core dump to the root file system. reboot Reboots the system, losing the core dump in the process. halt Stops the system, losing the core dump in the process. poweroff Power the system off, losing the core dump in the process. shell Runs a shell session from within the initramfs , you can record the core dump manually. final_action Enables additional operations such as reboot , halt , and poweroff after a successful kdump or when shell or dump_to_rootfs failure action completes. The default is reboot . failure_action Specifies the action to perform when a dump might fail in a kernel crash. The default is reboot . Prerequisites Root permissions. Fulfilled requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . Procedure As a root user, remove the hash sign ( # ) from the beginning of the #failure_action line in the /etc/kdump.conf configuration file. Replace the value with a required action. Additional resources Configuring the kdump target 4.6. Configuration file for kdump The configuration file for kdump kernel is /etc/sysconfig/kdump . This file controls the kdump kernel command line parameters. For most configurations, use the default options. However, in some scenarios you might need to modify certain parameters to control the kdump kernel behavior. For example, modifying the KDUMP_COMMANDLINE_APPEND option to append the kdump kernel command-line to obtain a detailed debugging output or the KDUMP_COMMANDLINE_REMOVE option to remove arguments from the kdump command line. KDUMP_COMMANDLINE_REMOVE This option removes arguments from the current kdump command line. It removes parameters that can cause kdump errors or kdump kernel boot failures. 
These parameters might have been parsed from the KDUMP_COMMANDLINE process or inherited from the /proc/cmdline file. When this variable is not configured, it inherits all values from the /proc/cmdline file. Configuring this option also provides information that is helpful in debugging an issue. To remove certain arguments, add them to KDUMP_COMMANDLINE_REMOVE as follows: KDUMP_COMMANDLINE_APPEND This option appends arguments to the current command line. These arguments might have been parsed by the KDUMP_COMMANDLINE_REMOVE variable. For the kdump kernel, disabling certain modules such as mce , cgroup , numa , hest_disable can help prevent kernel errors. These modules can consume a significant part of the kernel memory reserved for kdump or cause kdump kernel boot failures. To disable memory cgroups on the kdump kernel command line, run the command as follows: Additional resources The Documentation/admin-guide/kernel-parameters.txt file The /etc/sysconfig/kdump file 4.7. Testing the kdump configuration After configuring kdump , you must manually test a system crash and ensure that the vmcore file is generated in the defined kdump target. The vmcore file is captured from the context of the freshly booted kernel. Therefore, vmcore has critical information for debugging a kernel crash. Warning Do not test kdump on active production systems. The commands to test kdump will cause the kernel to crash with loss of data. Depending on your system architecture, ensure that you schedule significant maintenance time because kdump testing might require several reboots with a long boot time. If the vmcore file is not generated during the kdump test, identify and fix issues before you run the test again for a successful kdump test. If you make any manual system modifications, you must test the kdump configuration at the end of any system modification. For example, if you make any of the following changes, ensure that you test the kdump configuration for optimal kdump performance: Package upgrades. Hardware level changes, for example, storage or networking changes. Firmware upgrades. New installation and application upgrades that include third party modules. If you use the hot-plugging mechanism to add more memory on hardware that supports this mechanism. After you make changes in the /etc/kdump.conf or /etc/sysconfig/kdump file. Prerequisites You have root permissions on the system. You have saved all important data. The commands to test kdump cause the kernel to crash with loss of data. You have scheduled significant machine maintenance time depending on the system architecture. Procedure Enable the kdump service: Check the status of the kdump service with the kdumpctl command: Optionally, if you use the systemctl command, the output prints in the systemd journal. Start a kernel crash to test the kdump configuration. The sysrq-trigger key combination causes the kernel to crash and might reboot the system if required. On a kernel reboot, the address-YYYY-MM-DD-HH:MM:SS/vmcore file is created at the location you have specified in the /etc/kdump.conf file. The default is /var/crash/ . Additional resources Configuring the kdump target 4.8. Files produced by kdump after system crash After your system crashes, the kdump service captures the kernel memory in a dump file ( vmcore ) and it also generates additional diagnostic files to aid in troubleshooting and postmortem analysis. Files produced by kdump : vmcore - main kernel memory dump file containing system memory at the time of the crash.
It includes data according to the configuration of the core_collector program specified in the kdump configuration. By default, it includes the kernel data structures, process information, stack traces, and other diagnostic information. vmcore-dmesg.txt - contents of the kernel ring buffer log ( dmesg ) from the primary kernel that panicked. kexec-dmesg.log - contains kernel and system log messages from the execution of the secondary kexec kernel that collects the vmcore data. Additional resources What is the kernel ring buffer 4.9. Enabling and disabling the kdump service You can enable or disable the kdump functionality on a specific kernel or on all installed kernels. You must routinely test the kdump functionality and validate that it operates correctly. Prerequisites You have root permissions on the system. You have completed kdump requirements for configurations and targets. See Supported kdump configurations and targets . All configurations for installing kdump are set up as required. Procedure Enable the kdump service for multi-user.target : Start the service in the current session: Stop the kdump service: Disable the kdump service: Warning It is recommended to set kptr_restrict=1 as default. When kptr_restrict is set to 1 as default, the kdumpctl service loads the crash kernel regardless of whether Kernel Address Space Layout Randomization ( KASLR ) is enabled. If kptr_restrict is not set to 1 and KASLR is enabled, the contents of the /proc/kcore file are generated as all zeros. The kdumpctl service fails to access the /proc/kcore file and load the crash kernel. The kexec-kdump-howto.txt file displays a warning message, which recommends that you set kptr_restrict=1 . Verify the following in the sysctl.conf file to ensure that the kdumpctl service loads the crash kernel: kernel.kptr_restrict=1 in the sysctl.conf file. 4.10. Preventing kernel drivers from loading for kdump You can prevent the capture kernel from loading certain kernel drivers by adding the KDUMP_COMMANDLINE_APPEND= variable to the /etc/sysconfig/kdump configuration file. By using this method, you can prevent the kdump initial RAM disk image initramfs from loading the specified kernel module. This helps to prevent out-of-memory (OOM) killer errors or other crash kernel failures. You can append the KDUMP_COMMANDLINE_APPEND= variable by using one of the following configuration options: rd.driver.blacklist= <modules> modprobe.blacklist= <modules> Prerequisites You have root permissions on the system. Procedure Display the list of modules that are loaded into the currently running kernel. Select the kernel module that you intend to block from loading: Update the KDUMP_COMMANDLINE_APPEND= variable in the /etc/sysconfig/kdump file. For example: Also, consider the following example that uses the modprobe.blacklist= <modules> configuration option: Restart the kdump service: Additional resources dracut.cmdline man page on your system. 4.11. Running kdump on systems with encrypted disk When you use a LUKS encrypted partition, the system requires a certain amount of available memory. If the system has less than the required amount of available memory, the cryptsetup utility fails to mount the partition. As a result, capturing the vmcore file to an encrypted target location fails in the second kernel (capture kernel). The kdumpctl estimate command helps you estimate the amount of memory you need for kdump . kdumpctl estimate prints the recommended crashkernel value, which is the most suitable memory size required for kdump .
The recommended crashkernel value is calculated based on the current kernel size, kernel modules, initramfs, and the LUKS encrypted target memory requirement. If you are using the custom crashkernel= option, kdumpctl estimate prints the LUKS required size value. The value is the memory size required for the LUKS encrypted target. Procedure Print the estimated crashkernel= value: Configure the amount of required memory by increasing the crashkernel= value. Reboot the system. Note If the kdump service still fails to save the dump file to the encrypted target, increase the crashkernel= value as required. | [
"makedumpfile --mem-usage /proc/kcore TYPE PAGES EXCLUDABLE DESCRIPTION ------------------------------------------------------------- ZERO 501635 yes Pages filled with zero CACHE 51657 yes Cache pages CACHE_PRIVATE 5442 yes Cache pages + private USER 16301 yes User process pages FREE 77738211 yes Free pages KERN_DATA 1333192 no Dumpable kernel data",
"crashkernel=1G-4G:192M,4G-64G:256M,64G:512M",
"kdumpctl reset-crashkernel --kernel=ALL",
"crashkernel=192M",
"crashkernel=1G-4G:192M,2G-64G:256M",
"crashkernel=192M@16M",
"grubby --update-kernel ALL --args \"crashkernel= <custom-value> \"",
"reboot",
"echo c > /proc/sysrq-trigger",
"path /var/crash",
"kdump_post <path_to_kdump_post.sh>",
"*grep -v ^# /etc/kdump.conf | grep -v ^USD* ext4 /dev/mapper/vg00-varcrashvol path /var/crash core_collector makedumpfile -c --message-level 1 -d 31",
"path /usr/local/cores",
"ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937",
"raw /dev/sdb1",
"nfs penguin.example.com:/export/cores",
"sudo systemctl restart kdump.service",
"ssh [email protected] sshkey /root/.ssh/mykey",
"core_collector makedumpfile -l --message-level 1 -d 31",
"core_collector makedumpfile -l --message-level 1 -d 31",
"core_collector makedumpfile -c -d 31 --message-level 1",
"core_collector makedumpfile -p -d 31 --message-level 1",
"failure_action poweroff",
"KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\"",
"KDUMP_COMMANDLINE_APPEND=\"cgroup_disable=memory\"",
"kdumpctl restart",
"kdumpctl status kdump:Kdump is operational",
"echo c > /proc/sysrq-trigger",
"systemctl enable kdump.service",
"systemctl start kdump.service",
"systemctl stop kdump.service",
"systemctl disable kdump.service",
"lsmod Module Size Used by fuse 126976 3 xt_CHECKSUM 16384 1 ipt_MASQUERADE 16384 1 uinput 20480 1 xt_conntrack 16384 1",
"KDUMP_COMMANDLINE_APPEND=\"rd.driver.blacklist= hv_vmbus,hv_storvsc,hv_utils,hv_netvsc,hid-hyperv \"",
"KDUMP_COMMANDLINE_APPEND=\"modprobe.blacklist= emcp modprobe.blacklist= bnx2fc modprobe.blacklist= libfcoe modprobe.blacklist= fcoe \"",
"systemctl restart kdump",
"*kdumpctl estimate* Encrypted kdump target requires extra memory, assuming using the keyslot with minimum memory requirement Reserved crashkernel: 256M Recommended crashkernel: 652M Kernel image size: 47M Kernel modules size: 8M Initramfs size: 20M Runtime reservation: 64M LUKS required size: 512M Large modules: <none> WARNING: Current crashkernel size is lower than recommended size 652M."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/installing_rhel_9_for_real_time/configuring-kdump-on-the-command-line_installing-rhel-9-for-real-time |
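As a worked illustration of how the directives discussed in this chapter fit together, the following is a minimal /etc/kdump.conf sketch rather than a definitive configuration; the NFS export penguin.example.com:/export/cores, the makedumpfile options, and the dump_to_rootfs failure action are the example values documented above:

nfs penguin.example.com:/export/cores
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
failure_action dump_to_rootfs

After editing the file, restart the service and confirm that it is operational before triggering a test crash:

systemctl restart kdump.service
kdumpctl status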
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/providing-feedback-on-red-hat-documentation_ibmz |
Chapter 1. Installing the CLI | Chapter 1. Installing the CLI The Skupper CLI provides a method to create both Kubernetes and Podman sites. There are two methods to install the CLI: Section 1.1, "Downloading binaries" Section 1.2, "Using Red Hat packages" 1.1. Downloading binaries Downloading the Skupper CLI binaries is a quick way to get started with Red Hat Service Interconnect. However, consider using Red Hat packages on Linux to receive the latest updates. Procedure Download binary files for Linux, macOS or Windows, choose the latest Version for 1.8 at Software Downloads . For a Mac with Apple silicon, use Rosetta 2 and the Skupper CLI for Mac on x86-64 download. Unzip the downloaded file and place the Skupper executable on your PATH. Verify installation: USD skupper version client version 1.8.2-rh-1 1.2. Using Red Hat packages Installing Red Hat packages on Linux makes sure you receive the latest updates to the Skupper CLI. Prerequisites Make sure your subscription is activated and your system is registered. For more information about using the Customer Portal to activate your Red Hat subscription and register your system for packages, see Chapter 6, Using your subscription . Procedure Use the subscription-manager command to subscribe to the required package repositories. Replace <version> with 1 for the main release stream or 1.4 for the long term support release stream. Note Replacing <version> with 1 installs 1.8 , while 1.8 is the main release stream and changes after further releases. Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Use the yum or dnf command to install the skupper command: Additional information See Examples for the 'Hello world' tutorial. Use man containers.conf to view more information about podman configuration. | [
"skupper version client version 1.8.2-rh-1",
"sudo subscription-manager repos --enable=service-interconnect-_<version>_-for-rhel-8-x86_64-rpms",
"sudo subscription-manager repos --enable=service-interconnect-_<version>_-for-rhel-9-x86_64-rpms",
"sudo dnf install skupper-cli"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/installation/installing-skupper-cli |
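To make the placeholder substitution concrete, the following sketch shows the full install sequence on a Red Hat Enterprise Linux 9 host that tracks the main release stream; it assumes that <version> resolves to 1 as described above, so verify the resulting repository ID against your subscription before running it:

sudo subscription-manager repos --enable=service-interconnect-1-for-rhel-9-x86_64-rpms
sudo dnf install skupper-cli
skupper version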
Chapter 8. Integrating with OpenShift | Chapter 8. Integrating with OpenShift Section 8.2, "Automatic OpenShift token injection" Section 8.3, "Navigating Dev Spaces from OpenShift Developer Perspective" Section 8.4, "Navigating OpenShift web console from Dev Spaces" 8.1. Managing workspaces with OpenShift APIs On your organization's OpenShift cluster, OpenShift Dev Spaces workspaces are represented as DevWorkspace custom resources of the same name. As a result, if there is a workspace named my-workspace in the OpenShift Dev Spaces dashboard, there is a corresponding DevWorkspace custom resource named my-workspace in the user's project on the cluster. Because each DevWorkspace custom resource on the cluster represents a OpenShift Dev Spaces workspace, you can manage OpenShift Dev Spaces workspaces by using OpenShift APIs with clients such as the command-line oc . Each DevWorkspace custom resource contains details derived from the devfile of the Git repository cloned for the workspace. For example, a devfile might provide devfile commands and workspace container configurations. 8.1.1. Listing all workspaces As a user, you can list your workspaces by using the command line. Prerequisites An active oc session with permissions to get the DevWorkspace resources in your project on the cluster. See Getting started with the CLI . You know the relevant OpenShift Dev Spaces user namespace on the cluster. Tip You can visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . You are in the OpenShift Dev Spaces user namespace on the cluster. Tip On OpenShift, you can use the command-line oc tool to display your current namespace or switch to a namespace . Procedure To list your workspaces, enter the following on a command line: Example 8.1. Output Tip You can view PHASE changes live by adding the --watch flag to this command. Note Users with administrative permissions on the cluster can list all workspaces from all OpenShift Dev Spaces users by including the --all-namespaces flag. 8.1.2. Creating workspaces If your use case does not permit use of the OpenShift Dev Spaces dashboard, you can create workspaces with OpenShift APIs by applying custom resources to the cluster. Note Creating workspaces through the OpenShift Dev Spaces dashboard provides better user experience and configuration benefits compared to using the command line: As a user, you are automatically logged in to the cluster. OpenShift clients work automatically. OpenShift Dev Spaces and its components automatically convert the target Git repository's devfile into the DevWorkspace and DevWorkspaceTemplate custom resources on the cluster. Access to the workspace is secured by default with the routingClass: che in the DevWorkspace of the workspace. Recognition of the DevWorkspaceOperatorConfig configuration is managed by OpenShift Dev Spaces. Recognition of configurations in spec.devEnvironments specified in the CheCluster custom resource including: Persistent storage strategy is specified with devEnvironments.storage . Default IDE is specified with devEnvironments.defaultEditor . Default plugins are specified with devEnvironments.defaultPlugins . Container build configuration is specified with devEnvironments.containerBuildConfiguration . Prerequisites An active oc session with permissions to create DevWorkspace resources in your project on the cluster. See Getting started with the CLI . You know the relevant OpenShift Dev Spaces user namespace on the cluster. 
Tip You can visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . You are in the OpenShift Dev Spaces user namespace on the cluster. Tip On OpenShift, you can use the command-line oc tool to display your current namespace or switch to a namespace . Note OpenShift Dev Spaces administrators who intend to create workspaces for other users must create the DevWorkspace custom resource in a user namespace that is provisioned by OpenShift Dev Spaces or by the administrator. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/administration_guide/index#administration-guide:configuring-namespace-provisioning . Procedure To prepare the DevWorkspace custom resource, copy the contents of the target Git repository's devfile. Example 8.2. Copied devfile contents with schemaVersion: 2.2.0 components: - name: tooling-container container: image: quay.io/devfile/universal-developer-image:ubi8-latest Tip For more details, see the devfile v2 documentation . Create a DevWorkspace custom resource, pasting the devfile contents from the step under the spec.template field. Example 8.3. A DevWorkspace custom resource kind: DevWorkspace apiVersion: workspace.devfile.io/v1alpha2 metadata: name: my-devworkspace 1 namespace: user1-dev 2 spec: routingClass: che started: true 3 contributions: 4 - name: ide uri: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors/devfile?che-editor=che-incubator/che-code/latest template: projects: 5 - name: my-project-name git: remotes: origin: https://github.com/eclipse-che/che-docs components: 6 - name: tooling-container container: image: quay.io/devfile/universal-developer-image:ubi8-latest 1 Name of the DevWorkspace custom resource. This will be the name of the new workspace. 2 User namespace, which is the target project for the new workspace. 3 Determines whether the workspace must be started when the DevWorkspace custom resource is created. 4 URL reference to the Microsoft Visual Studio Code - Open Source IDE devfile. 5 Details about the Git repository to clone into the workspace when it starts. 6 List of components such as workspace containers and volume components. Apply the DevWorkspace custom resource to the cluster. Verification Verify that the workspace is starting by checking the PHASE status of the DevWorkspace . Example 8.4. Output When the workspace has successfully started, its PHASE status changes to Running in the output of the oc get devworkspaces command. Example 8.5. Output You can then open the workspace by using one of these options: Visit the URL provided in the INFO section of the output of the oc get devworkspaces command. Open the workspace from the OpenShift Dev Spaces dashboard. 8.1.3. Stopping workspaces You can stop a workspace by setting the spec.started field in the Devworkspace custom resource to false . Prerequisites An active oc session on the cluster. See Getting started with the CLI . You know the workspace name. Tip You can find the relevant workspace name in the output of USD oc get devworkspaces . You know the relevant OpenShift Dev Spaces user namespace on the cluster. Tip You can visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . You are in the OpenShift Dev Spaces user namespace on the cluster. Tip On OpenShift, you can use the command-line oc tool to display your current namespace or switch to a namespace . 
Procedure Run the following command to stop a workspace: 8.1.4. Starting stopped workspaces You can start a stopped workspace by setting the spec.started field in the Devworkspace custom resource to true . Prerequisites An active oc session on the cluster. See Getting started with the CLI . You know the workspace name. Tip You can find the relevant workspace name in the output of USD oc get devworkspaces . You know the relevant OpenShift Dev Spaces user namespace on the cluster. Tip You can visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . You are in the OpenShift Dev Spaces user namespace on the cluster. Tip On OpenShift, you can use the command-line oc tool to display your current namespace or switch to a namespace . Procedure Run the following command to start a stopped workspace: 8.1.5. Removing workspaces You can remove a workspace by simply deleting the DevWorkspace custom resource. Warning Deleting the DevWorkspace custom resource will also delete other workspace resources if they were created by OpenShift Dev Spaces: for example, the referenced DevWorkspaceTemplate and per-workspace PersistentVolumeClaims . Tip Remove workspaces by using the OpenShift Dev Spaces dashboard whenever possible. Prerequisites An active oc session on the cluster. See Getting started with the CLI . You know the workspace name. Tip You can find the relevant workspace name in the output of USD oc get devworkspaces . You know the relevant OpenShift Dev Spaces user namespace on the cluster. Tip You can visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . You are in the OpenShift Dev Spaces user namespace on the cluster. Tip On OpenShift, you can use the command-line oc tool to display your current namespace or switch to a namespace . Procedure Run the following command to remove a workspace: 8.2. Automatic OpenShift token injection This section describes how to use the OpenShift user token that is automatically injected into workspace containers which allows running OpenShift Dev Spaces CLI commands against OpenShift cluster. Procedure Open the OpenShift Dev Spaces dashboard and start a workspace. Once the workspace is started, open a terminal in the container that contains the OpenShift Dev Spaces CLI. Execute OpenShift Dev Spaces CLI commands which allow you to run commands against OpenShift cluster. CLI can be used for deploying applications, inspecting and managing cluster resources, and viewing logs. OpenShift user token will be used during the execution of the commands. Warning The automatic token injection currently works only on the OpenShift infrastructure. 8.3. Navigating Dev Spaces from OpenShift Developer Perspective The OpenShift Container Platform web console provides two perspectives; the Administrator perspective and the Developer perspective. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on the OpenShift Container Platform by importing existing codebases, images, and Dockerfiles. Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using OpenShift Dev Spaces. 8.3.1. 
OpenShift Developer Perspective integration with OpenShift Dev Spaces This section provides information about OpenShift Developer Perspective support for OpenShift Dev Spaces. When the OpenShift Dev Spaces Operator is deployed into OpenShift Container Platform 4.2 and later, it creates a ConsoleLink Custom Resource (CR). This adds an interactive link to the Red Hat Applications menu for accessing the OpenShift Dev Spaces installation using the OpenShift Developer Perspective console. To access the Red Hat Applications menu, click the three-by-three matrix icon on the main screen of the OpenShift web console. The OpenShift Dev Spaces Console Link , displayed in the drop-down menu, creates a new workspace or redirects the user to an existing one. Note OpenShift Container Platform console links are not created when OpenShift Dev Spaces is used with HTTP resources When installing OpenShift Dev Spaces with the From Git option, the OpenShift Developer Perspective console link is only created if OpenShift Dev Spaces is deployed with HTTPS. The console link will not be created if an HTTP resource is used. 8.3.2. Editing the code of applications running in OpenShift Container Platform using OpenShift Dev Spaces This section describes how to start editing the source code of applications running on OpenShift using OpenShift Dev Spaces. Prerequisites OpenShift Dev Spaces is deployed on the same OpenShift 4 cluster. Procedure Open the Topology view to list all projects. In the Select an Application search field, type workspace to list all workspaces. Click the workspace to edit. The deployments are displayed as graphical circles surrounded by circular buttons. One of these buttons is Edit Source Code . To edit the code of an application using OpenShift Dev Spaces, click the Edit Source Code button. This redirects to a workspace with the cloned source code of the application component. 8.3.3. Accessing OpenShift Dev Spaces from Red Hat Applications menu This section describes how to access OpenShift Dev Spaces workspaces from the Red Hat Applications menu on the OpenShift Container Platform. Prerequisites The OpenShift Dev Spaces Operator is available in OpenShift 4. Procedure Open the Red Hat Applications menu by using the three-by-three matrix icon in the upper right corner of the main screen. The drop-down menu displays the available applications. Click the OpenShift Dev Spaces link to open the Dev Spaces Dashboard. 8.4. Navigating OpenShift web console from Dev Spaces This section describes how to access OpenShift web console from OpenShift Dev Spaces. Prerequisites The OpenShift Dev Spaces Operator is available in OpenShift 4. Procedure Open the OpenShift Dev Spaces dashboard and click the three-by-three matrix icon in the upper right corner of the main screen. The drop-down menu displays the available applications. Click the OpenShift console link to open the OpenShift web console. | [
"oc get devworkspaces",
"NAMESPACE NAME DEVWORKSPACE ID PHASE INFO user1-dev spring-petclinic workspace6d99e9ffb9784491 Running https://url-to-workspace.com user1-dev golang-example workspacedf64e4a492cd4701 Stopped Stopped user1-dev python-hello-world workspace69c26884bbc141f2 Failed Container tooling has state CrashLoopBackOff",
"components: - name: tooling-container container: image: quay.io/devfile/universal-developer-image:ubi8-latest",
"kind: DevWorkspace apiVersion: workspace.devfile.io/v1alpha2 metadata: name: my-devworkspace 1 namespace: user1-dev 2 spec: routingClass: che started: true 3 contributions: 4 - name: ide uri: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors/devfile?che-editor=che-incubator/che-code/latest template: projects: 5 - name: my-project-name git: remotes: origin: https://github.com/eclipse-che/che-docs components: 6 - name: tooling-container container: image: quay.io/devfile/universal-developer-image:ubi8-latest",
"oc get devworkspaces -n <user_project> --watch",
"NAMESPACE NAME DEVWORKSPACE ID PHASE INFO user1-dev my-devworkspace workspacedf64e4a492cd4701 Starting Waiting for workspace deployment",
"NAMESPACE NAME DEVWORKSPACE ID PHASE INFO user1-dev my-devworkspace workspacedf64e4a492cd4701 Running https://url-to-workspace.com",
"oc patch devworkspace <workspace_name> -p '{\"spec\":{\"started\":false}}' --type=merge -n <user_namespace> && wait --for=jsonpath='{.status.phase}'=Stopped dw/ <workspace_name> -n <user_namespace>",
"oc patch devworkspace <workspace_name> -p '{\"spec\":{\"started\":true}}' --type=merge -n <user_namespace> && wait --for=jsonpath='{.status.phase}'=Running dw/ <workspace_name> -n <user_namespace>",
"oc delete devworkspace <workspace_name> -n <user_namespace>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/user_guide/integrating-with-kubernetes |
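For illustration, the generic patch commands above can be combined into a short stop-and-restart sketch; the spring-petclinic workspace name and the user1-dev namespace are the example values used earlier in this chapter, so substitute your own workspace and namespace:

oc patch devworkspace spring-petclinic -p '{"spec":{"started":false}}' --type=merge -n user1-dev
oc get devworkspaces -n user1-dev --watch
oc patch devworkspace spring-petclinic -p '{"spec":{"started":true}}' --type=merge -n user1-dev

The --watch flag lets you confirm that the PHASE column moves to Stopped and then back to Running before you continue.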
6.16. Other Virtual Machine Tasks | 6.16. Other Virtual Machine Tasks 6.16.1. Enabling SAP Monitoring Enable SAP monitoring on a virtual machine through the Administration Portal. Enabling SAP Monitoring on Virtual Machines Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Custom Properties tab. Select sap_agent from the drop-down list. Ensure the secondary drop-down menu is set to True . If properties have been set, select the plus sign to add a new property rule and select sap_agent . Click OK . 6.16.2. Configuring Red Hat Enterprise Linux 5.4 and later Virtual Machines to use SPICE SPICE is a remote display protocol designed for virtual environments, which enables you to view a virtualized desktop or server. SPICE delivers a high quality user experience, keeps CPU consumption low, and supports high quality video streaming. Using SPICE on a Linux machine significantly improves the movement of the mouse cursor on the console of the virtual machine. To use SPICE, the X-Windows system requires additional QXL drivers. The QXL drivers are provided with Red Hat Enterprise Linux 5.4 and later. Earlier versions are not supported. Installing SPICE on a virtual machine running Red Hat Enterprise Linux significantly improves the performance of the graphical user interface. Note Typically, this is most useful for virtual machines where the user requires the use of the graphical user interface. System administrators who are creating virtual servers may prefer not to configure SPICE if their use of the graphical user interface is minimal. 6.16.2.1. Installing and Configuring QXL Drivers You must manually install QXL drivers on virtual machines running Red Hat Enterprise Linux 5.4 or later. This is unnecessary for virtual machines running Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7 as the QXL drivers are installed by default. Installing QXL Drivers Log in to a Red Hat Enterprise Linux virtual machine. Install the QXL drivers: # yum install xorg-x11-drv-qxl You can configure QXL drivers using either a graphical interface or the command line. Perform only one of the following procedures. Configuring QXL drivers in GNOME Click System . Click Administration . Click Display . Click the Hardware tab. Click Video Cards Configure . Select qxl and click OK . Restart X-Windows by logging out of the virtual machine and logging back in. Configuring QXL drivers on the command line Back up /etc/X11/xorg.conf : # cp /etc/X11/xorg.conf /etc/X11/xorg.conf.USDUSD.backup Make the following change to the Device section of /etc/X11/xorg.conf : Section "Device" Identifier "Videocard0" Driver "qxl" Endsection 6.16.2.2. Configuring a Virtual Machine's Tablet and Mouse to use SPICE Edit the /etc/X11/xorg.conf file to enable SPICE for your virtual machine's tablet devices. 
Configuring a Virtual Machine's Tablet and Mouse to use SPICE Verify that the tablet device is available on your guest: # /sbin/lsusb -v | grep 'QEMU USB Tablet' Back up /etc/X11/xorg.conf : # cp /etc/X11/xorg.conf /etc/X11/xorg.conf.USDUSD.backup Make the following changes to /etc/X11/xorg.conf : Section "ServerLayout" Identifier "single head configuration" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Tablet" "SendCoreEvents" InputDevice "Mouse" "CorePointer" EndSection Section "InputDevice" Identifier "Mouse" Driver "void" #Option "Device" "/dev/input/mice" #Option "Emulate3Buttons" "yes" EndSection Section "InputDevice" Identifier "Tablet" Driver "evdev" Option "Device" "/dev/input/event2" Option "CorePointer" "true" EndSection Log out and log back into the virtual machine to restart X-Windows. 6.16.3. KVM Virtual Machine Timing Management Virtualization poses various challenges for virtual machine time keeping. Virtual machines which use the Time Stamp Counter (TSC) as a clock source may suffer timing issues as some CPUs do not have a constant Time Stamp Counter. Virtual machines running without accurate timekeeping can have serious effects on some networked applications because your virtual machine will run faster or slower than the actual time. KVM works around this issue by providing virtual machines with a paravirtualized clock. The KVM pvclock provides a stable source of timing for KVM guests that support it. Presently, only Red Hat Enterprise Linux 5.4 and later virtual machines fully support the paravirtualized clock. Virtual machines can have several problems caused by inaccurate clocks and counters: Clocks can fall out of synchronization with the actual time, which invalidates sessions and affects networks. Virtual machines with slower clocks may have issues migrating. These problems exist on other virtualization platforms and timing should always be tested. Important The Network Time Protocol (NTP) daemon should be running on the host and the virtual machines. Enable the ntpd service and add it to the default startup sequence: For Red Hat Enterprise Linux 6 For Red Hat Enterprise Linux 7 Using the ntpd service should minimize the effects of clock skew in all cases. The NTP servers you are trying to use must be operational and accessible to your hosts and virtual machines. Determining if your CPU has the constant Time Stamp Counter Your CPU has a constant Time Stamp Counter if the constant_tsc flag is present. To determine if your CPU has the constant_tsc flag, run the following command: USD cat /proc/cpuinfo | grep constant_tsc If any output is given, your CPU has the constant_tsc bit. If no output is given, follow the instructions below. Configuring hosts without a constant Time Stamp Counter Systems without constant time stamp counters require additional configuration. Power management features interfere with accurate time keeping and must be disabled for virtual machines to accurately keep time with KVM. Important These instructions are for AMD revision F CPUs only. If the CPU lacks the constant_tsc bit, disable all power management features ( BZ#513138 ). Each system has several timers it uses to keep time. The TSC is not stable on the host, which is sometimes caused by cpufreq changes, deep C state, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC.
To prevent the kernel using deep C states append "processor.max_cstate=1" to the kernel boot options in the grub.conf file on the host: term Red Hat Enterprise Linux Server (2.6.18-159.el5) root (hd0,0) kernel /vmlinuz-2.6.18-159.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet processor.max_cstate=1 Disable cpufreq (only necessary on hosts without the constant_tsc ) by editing the /etc/sysconfig/cpuspeed configuration file and change the MIN_SPEED and MAX_SPEED variables to the highest frequency available. Valid limits can be found in the /sys/devices/system/cpu/cpu/cpufreq/scaling_available_frequencies files. Using the engine-config tool to receive alerts when hosts drift out of sync. You can use the engine-config tool to configure alerts when your hosts drift out of sync. There are 2 relevant parameters for time drift on hosts: EnableHostTimeDrift and HostTimeDriftInSec . EnableHostTimeDrift , with a default value of false, can be enabled to receive alert notifications of host time drift. The HostTimeDriftInSec parameter is used to set the maximum allowable drift before alerts start being sent. Alerts are sent once per hour per host. Using the paravirtualized clock with Red Hat Enterprise Linux virtual machines For certain Red Hat Enterprise Linux virtual machines, additional kernel parameters are required. These parameters can be set by appending them to the end of the /kernel line in the /boot/grub/grub.conf file of the virtual machine. Note The process of configuring kernel parameters can be automated using the ktune package The ktune package provides an interactive Bourne shell script, fix_clock_drift.sh . When run as the superuser, this script inspects various system parameters to determine if the virtual machine on which it is run is susceptible to clock drift under load. If so, it then creates a new grub.conf.kvm file in the /boot/grub/ directory. This file contains a kernel boot line with additional kernel parameters that allow the kernel to account for and prevent significant clock drift on the KVM virtual machine. After running fix_clock_drift.sh as the superuser, and once the script has created the grub.conf.kvm file, then the virtual machine's current grub.conf file should be backed up manually by the system administrator, the new grub.conf.kvm file should be manually inspected to ensure that it is identical to grub.conf with the exception of the additional boot line parameters, the grub.conf.kvm file should finally be renamed grub.conf , and the virtual machine should be rebooted. The table below lists versions of Red Hat Enterprise Linux and the parameters required for virtual machines on systems without a constant Time Stamp Counter. Red Hat Enterprise Linux Additional virtual machine kernel parameters 5.4 AMD64/Intel 64 with the paravirtualized clock Additional parameters are not required 5.4 AMD64/Intel 64 without the paravirtualized clock notsc lpj=n 5.4 x86 with the paravirtualized clock Additional parameters are not required 5.4 x86 without the paravirtualized clock clocksource=acpi_pm lpj=n 5.3 AMD64/Intel 64 notsc 5.3 x86 clocksource=acpi_pm 4.8 AMD64/Intel 64 notsc 4.8 x86 clock=pmtmr 3.9 AMD64/Intel 64 Additional parameters are not required 3.9 x86 Additional parameters are not required 6.16.4. 
Adding a Trusted Platform Module device Trusted Platform Module (TPM) devices provide a secure crypto-processor designed to carry out cryptographic operations such as generating cryptographic keys, random numbers, and hashes, or for storing data that can be used to verify software configurations securely. TPM devices are commonly used for disk encryption. QEMU and libvirt implement support for emulated TPM 2.0 devices, which is what Red Hat Virtualization uses to add TPM devices to Virtual Machines. Once an emulated TPM device is added to the virtual machine, it can be used as a normal TPM 2.0 device in the guest OS. Important If there is TPM data stored for the virtual machine and the TPM device is disabled in the virtual machine, the TPM data is permanently removed. Enabling a TPM device In the Add Virtual Machine or Edit Virtual Machine screen, click Show Advanced Options . In the Resource Allocation tab, select the TPM Device Enabled check box. Limitations The following limitations apply: TPM devices can only be used on x86_64 machines with UEFI firmware and PowerPC machines with pSeries firmware installed. Virtual machines with TPM devices can not have snapshots with memory. While the Manager retrieves and stores TPM data periodically, there is no guarantee that the Manager will always have the latest version of the TPM data. Note This process can take 120 seconds or more, and you must wait for the process to complete before you can take snapshot of a running virtual machine, clone a running virtual machine, or migrate a running virtual machine. TPM devices can only be enabled for virtual machines running RHEL 7 or later and Windows 8.1 or later. Virtual machines and templates with TPM data can not be exported or imported. | [
"yum install xorg-x11-drv-qxl",
"cp /etc/X11/xorg.conf /etc/X11/xorg.conf.USDUSD.backup",
"Section \"Device\" Identifier \"Videocard0\" Driver \"qxl\" Endsection",
"/sbin/lsusb -v | grep 'QEMU USB Tablet'",
"If there is no output from the command, do not continue configuring the tablet.",
"cp /etc/X11/xorg.conf /etc/X11/xorg.conf.USDUSD.backup",
"Section \"ServerLayout\" Identifier \"single head configuration\" Screen 0 \"Screen0\" 0 0 InputDevice \"Keyboard0\" \"CoreKeyboard\" InputDevice \"Tablet\" \"SendCoreEvents\" InputDevice \"Mouse\" \"CorePointer\" EndSection Section \"InputDevice\" Identifier \"Mouse\" Driver \"void\" #Option \"Device\" \"/dev/input/mice\" #Option \"Emulate3Buttons\" \"yes\" EndSection Section \"InputDevice\" Identifier \"Tablet\" Driver \"evdev\" Option \"Device\" \"/dev/input/event2\" Option \"CorePointer\" \"true\" EndSection",
"service ntpd start chkconfig ntpd on",
"systemctl start ntpd.service systemctl enable ntpd.service",
"cat /proc/cpuinfo | grep constant_tsc",
"term Red Hat Enterprise Linux Server (2.6.18-159.el5) root (hd0,0) kernel /vmlinuz-2.6.18-159.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet processor.max_cstate=1"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-other_virtual_machine_tasks |
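As a quick pre-check of the timekeeping prerequisites described in section 6.16.3, the following sketch for a Red Hat Enterprise Linux 7 host combines the commands shown above; it only enables NTP and reports whether the CPU exposes the constant_tsc flag, and it does not change any other configuration:

systemctl enable ntpd.service
systemctl start ntpd.service
cat /proc/cpuinfo | grep constant_tsc

If the last command prints no output, the host lacks a constant Time Stamp Counter and the additional power management configuration described above is required.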
Chapter 11. Installing a cluster on Azure using ARM templates | Chapter 11. Installing a cluster on Azure using ARM templates In OpenShift Container Platform version 4.12, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was last tested using version 2.38.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 11.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.3. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 11.3.1. 
Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. 
The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage 11.3.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 11.3.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 11.3.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 11.3.5. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, review the following information: Your Azure account subscription must have the following roles: User Access Administrator Contributor Your Azure Active Directory (AD) must have the following permission: "microsoft.directory/servicePrincipals/createAsOwner" To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 11.3.6. Required Azure permissions for user-provisioned infrastructure When you assign Contributor and User Access Administrator roles to the service principal, you automatically grant all the required permissions. If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 11.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 11.2. Required permissions for creating compute resources Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/deallocate/action Example 11.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 11.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Example 11.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 11.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 11.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 11.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 11.9. Required permissions for creating deployments Microsoft.Resources/deployments/read Microsoft.Resources/deployments/write Microsoft.Resources/deployments/validate/action Microsoft.Resources/deployments/operationstatuses/read Example 11.10. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/write Example 11.11. 
Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 11.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. Example 11.13. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 11.14. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/images/delete Example 11.15. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 11.16. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Example 11.17. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 11.18. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 11.19. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions related to resource group creation to your subscription. After the resource group is created, you can scope the rest of the permissions to the created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 11.3.7. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. 
If you want to use a custom role, you have created a custom role with the required permissions listed in the Required Azure permissions for user-provisioned infrastructure section. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 11.3.8. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
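If you want to confirm which regions your subscription can actually reach before you run the installation program, you can list them with the Azure CLI. This optional check is only an illustration and is not part of the documented procedure:
USD az account list-locations --query "[].name" -o tsv
Compare the output against the supported regions listed below.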
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 11.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 11.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 11.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 11.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.2. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.4.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 11.20. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 11.5. 
Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. 
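For convenience, you might record the image details in environment variables before moving on to the ARM template step that follows. These variable names are hypothetical and are not read by the installation program or the provided templates; the values shown are taken from the North America example output above:
USD export MARKETPLACE_PUBLISHER=redhat
USD export MARKETPLACE_OFFER=rh-ocp-worker
USD export MARKETPLACE_SKU=rh-ocp-worker
USD export MARKETPLACE_VERSION=4.8.2021122100
If you are located in EMEA, use redhat-limited as the publisher, and if you need a Hyper-V generation version 1 image, use the rh-ocp-worker-gen1 SKU instead.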
If you use the Azure Resource Manager (ARM) template to deploy your worker nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "4.8.2021122100" } ... } ... } 11.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 11.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
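As an illustration of how the key pair is used after the cluster is running, you might connect to a node as the core user with a command similar to the following; the key path and node address are placeholders:
USD ssh -i <path>/<file_name> core@<node_address>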
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 11.8. Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 11.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
/var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 11.8.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . 
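For reference, a minimal sketch of the install-config.yaml file that these prompts might produce is shown below. Only a subset of fields is shown, all values are hypothetical placeholders, and defaults can differ between OpenShift Container Platform versions; rely on the file that the installation program actually generates rather than on this sketch:
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  name: test-cluster
platform:
  azure:
    baseDomainResourceGroupName: ocp-cluster
    region: centralus
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'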
Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 11.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . 
Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.8.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 11.8.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. 
Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. 
This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 11.9. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" Note If you want to assign a custom role with all the required permissions to the identity, run the following command: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role <custom_role> \ 1 --scope "USD{RESOURCE_GROUP_ID}" 1 Specifies the custom role name. 11.10. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name.
For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".url'` Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the local VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 11.11. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 11.12. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
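Optionally, before creating each deployment in the sections that follow, you can ask Azure to validate an ARM template without deploying it. This check is not part of the documented procedure; for example, to validate the VNet template using the variables that you exported earlier:
USD az deployment group validate -g USD{RESOURCE_GROUP} --template-file "<installation_directory>/01_vnet.json" --parameters baseName="USD{INFRA_ID}"
The same check can be run against the other templates in this installation by substituting the template file and its parameters.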
Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 11.12.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 11.21. 01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 11.13. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. 
Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.13.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 11.22. 02_storage.json ARM template { "USDschema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "architecture": { "type": "string", "metadata": { "description": "The architecture of the Virtual Machines" }, "defaultValue": "x64", "allowedValues": [ "Arm64", "x64" ] }, "baseName": { "type": "string", "minLength": 1, "metadata": { "description": "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "storageAccount": { "type": "string", "metadata": { "description": "The Storage Account name" } }, "vhdBlobURL": { "type": "string", "metadata": { "description": "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables": { "location": "[resourceGroup().location]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName": "[parameters('baseName')]", "imageNameGen2": "[concat(parameters('baseName'), '-gen2')]", "imageRelease": "1.0.0" }, "resources": [ { "apiVersion": "2021-10-01", "type": "Microsoft.Compute/galleries", "name": "[variables('galleryName')]", "location": "[variables('location')]", "resources": [ { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageName')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V1", "identifier": { "offer": "rhcos", "publisher": "RedHat", "sku": "basic" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageName')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] }, { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], 
"properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V2", "identifier": { "offer": "rhcos-gen2", "publisher": "RedHat-gen2", "sku": "gen2" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageNameGen2')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] } ] } ] } 11.14. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 11.14.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 11.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 11.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 11.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 11.15. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. 
Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 11.15.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 11.23. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { 
"domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "public-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", "properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : 
"[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 11.16. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.16.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 11.24. 
04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "clusterNsgName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/public-lb-backend')]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 11.17. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.17.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 11.25. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : 
"[length(variables('vmNames'))]" }, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 11.18. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 11.19. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.19.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 11.26. 
06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', 
concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 11.20. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 11.21. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.23. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 11.24. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service | [
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"4.8.2021122100\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"public-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 
6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/installing-azure-user-infra |
Chapter 8. Configuring logging | Chapter 8. Configuring logging Most services in Red Hat Enterprise Linux log status messages, warnings, and errors. You can use the rsyslogd service to log these entries to local files or to a remote logging server. 8.1. Configuring a remote logging solution To ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server. 8.1.1. The Rsyslog logging service The Rsyslog application, in combination with the systemd-journald service, provides local and remote logging support in Red Hat Enterprise Linux. The rsyslogd daemon continuously reads syslog messages received by the systemd-journald service from the Journal. rsyslogd then filters and processes these syslog events and records them to rsyslog log files or forwards them to other services according to its configuration. The rsyslogd daemon also provides extended filtering, encryption protected relaying of messages, input and output modules, and support for transportation using the TCP and UDP protocols. In /etc/rsyslog.conf , which is the main configuration file for rsyslog , you can specify the rules according to which rsyslogd handles the messages. Generally, you can classify messages by their source and topic (facility) and urgency (priority), and then assign an action that should be performed when a message fits these criteria. In /etc/rsyslog.conf , you can also see a list of log files maintained by rsyslogd . Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba , store their log files in a subdirectory within /var/log/ . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 8.1.2. Installing Rsyslog documentation The Rsyslog application has extensive online documentation that is available at https://www.rsyslog.com/doc/ , but you can also install the rsyslog-doc documentation package locally. Prerequisites You have activated the AppStream repository on your system. You are authorized to install new packages using sudo . Procedure Install the rsyslog-doc package: Verification Open the /usr/share/doc/rsyslog/html/index.html file in a browser of your choice, for example: 8.1.3. Configuring a server for remote logging over TCP The Rsyslog application enables you to both run a logging server and configure individual systems to send their log files to the logging server. To use remote logging through TCP, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. With the Rsyslog application, you can maintain a centralized logging system where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, you can configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues cannot be configured for connections using the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, it does not have to be loaded. By default, rsyslog uses TCP on port 514 . Prerequisites Rsyslog is installed on the server system. You are logged in as root on the server. 
The policycoreutils-python-utils package is installed for the optional step using the semanage command. The firewalld service is running. Procedure Optional: To use a different port for rsyslog traffic, add the syslogd_port_t SELinux type to port. For example, enable port 30514 : Optional: To use a different port for rsyslog traffic, configure firewalld to allow incoming rsyslog traffic on that port. For example, allow TCP traffic on port 30514 : Create a new file in the /etc/rsyslog.d/ directory named, for example, remotelog.conf , and insert the following content: # Define templates before the rules that use them # Per-Host templates for remote systems template(name="TmplAuthpriv" type="list") { constant(value="/var/log/remote/auth/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } template(name="TmplMsg" type="list") { constant(value="/var/log/remote/msg/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } # Provides TCP syslog reception module(load="imtcp") # Adding this ruleset to process remote messages ruleset(name="remote1"){ authpriv.* action(type="omfile" DynaFile="TmplAuthpriv") *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg") } input(type="imtcp" port="30514" ruleset="remote1") Save the changes to the /etc/rsyslog.d/remotelog.conf file. Test the syntax of the /etc/rsyslog.conf file: Make sure the rsyslog service is running and enabled on the logging server: Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Your log server is now configured to receive and store log files from the other systems in your environment. Additional resources rsyslogd(8) , rsyslog.conf(5) , semanage(8) , and firewall-cmd(1) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 8.1.4. Configuring remote logging to a server over TCP You can configure a system for forwarding log messages to a server over the TCP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it. Prerequisites The rsyslog package is installed on the client systems that should report to the server. You have configured the server for remote logging. The specified port is permitted in SELinux and open in firewall. The system contains the policycoreutils-python-utils package, which provides the semanage command for adding a non-standard port to the SELinux configuration. Procedure Create a new file in the /etc/rsyslog.d/ directory named, for example, 10-remotelog.conf , and insert the following content: Where: The queue.type="linkedlist" setting enables a LinkedList in-memory queue, The queue.filename setting defines a disk storage. The backup files are created with the example_fwd prefix in the working directory specified by the preceding global workDirectory directive. The action.resumeRetryCount -1 setting prevents rsyslog from dropping messages when retrying to connect if server is not responding, The queue.saveOnShutdown="on" setting saves in-memory data if rsyslog shuts down. The last line forwards all received messages to the logging server. Port specification is optional. With this configuration, rsyslog sends messages to the server but keeps messages in memory if the remote server is not reachable. 
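The forwarding configuration that the step above asks you to place in /etc/rsyslog.d/10-remotelog.conf is not reproduced here; a minimal sketch that matches the settings explained in the "Where:" list might look like the following (example.com and port 30514 are placeholders for your logging server, and the queue file is created in the directory set by the global workDirectory directive, /var/lib/rsyslog by default on RHEL):

# Forward all messages over TCP, with an in-memory queue backed by disk
*.* action(type="omfwd"
      queue.type="linkedlist"
      queue.filename="example_fwd"
      action.resumeRetryCount="-1"
      queue.saveOnShutdown="on"
      target="example.com" port="30514" protocol="tcp"
     )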
A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance. Note Rsyslog processes configuration files /etc/rsyslog.d/ in the lexical order. Restart the rsyslog service. Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the /var/log/messages log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 8.1.5. Configuring TLS-encrypted remote logging By default, Rsyslog sends remote-logging communication in the plain text format. If your scenario requires to secure this communication channel, you can encrypt it using TLS. To use encrypted transport through TLS, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. You can use either the ossl network stream driver (OpenSSL) or the gtls stream driver (GnuTLS). Note If you have a separate system with higher security, for example, a system that is not connected to any network or has stricter authorizations, use the separate system as the certifying authority (CA). You can customize your connection settings with stream drivers on the server side on the global , module , and input levels, and on the client side on the global and action levels. The more specific configuration overrides the more general configuration. This means, for example, that you can use ossl in global settings for most connections and gtls on the input and action settings only for specific connections. Prerequisites You have root access to both the client and server systems. The following packages are installed on the server and the client systems: The rsyslog package. For the ossl network stream driver, the rsyslog-openssl package. For the gtls network stream driver, the rsyslog-gnutls package. For generating certificates by using the certtool command, the gnutls-utils package. On your logging server, the following certificates are in the /etc/pki/ca-trust/source/anchors/ directory and your system configuration is updated by using the update-ca-trust command: ca-cert.pem - a CA certificate that can verify keys and certificates on logging servers and clients. server-cert.pem - a public key of the logging server. server-key.pem - a private key of the logging server. On your logging clients, the following certificates are in the /etc/pki/ca-trust/source/anchors/ directory and your system configuration is updated by using update-ca-trust : ca-cert.pem - a CA certificate that can verify keys and certificates on logging servers and clients. client-cert.pem - a public key of a client. client-key.pem - a private key of a client. Procedure Configure the server for receiving encrypted logs from your client systems: Create a new file in the /etc/rsyslog.d/ directory named, for example, securelogser.conf . To encrypt the communication, the configuration file must contain paths to certificate files on your server, a selected authentication method, and a stream driver that supports TLS encryption. 
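The certificates listed in the prerequisites of Section 8.1.5 must exist before you add the server and client configuration that follows. As a rough sketch only, assuming the gnutls-utils package is installed and that the interactive certtool prompts are answered with values appropriate for your site (repeat the last three steps with client-key.pem and client-cert.pem for each client):

# Create the CA key and self-signed CA certificate
certtool --generate-privkey --outfile ca-key.pem
certtool --generate-self-signed --load-privkey ca-key.pem --outfile ca-cert.pem
# Create the server key and a certificate signed by the CA
certtool --generate-privkey --outfile server-key.pem
certtool --generate-request --load-privkey server-key.pem --outfile server-request.pem
certtool --generate-certificate --load-request server-request.pem --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem --outfile server-cert.pem
# Place the files where the prerequisites expect them and refresh the trust store
cp ca-cert.pem server-cert.pem server-key.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust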
Add the following lines to the /etc/rsyslog.d/securelogser.conf file: # Set certificate files global( DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem" DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/server-cert.pem" DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/server-key.pem" ) # TCP listener module( load="imtcp" PermittedPeer=["client1.example.com", "client2.example.com"] StreamDriver.AuthMode="x509/name" StreamDriver.Mode="1" StreamDriver.Name="ossl" ) # Start up listener at port 514 input( type="imtcp" port="514" ) Note If you prefer the GnuTLS driver, use the StreamDriver.Name="gtls" configuration option. See the documentation installed with the rsyslog-doc package for more information about less strict authentication modes than x509/name . Save the changes to the /etc/rsyslog.d/securelogser.conf file. Verify the syntax of the /etc/rsyslog.conf file and any files in the /etc/rsyslog.d/ directory: Make sure the rsyslog service is running and enabled on the logging server: Restart the rsyslog service: Optional: If Rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Configure clients for sending encrypted logs to the server: On a client system, create a new file in the /etc/rsyslog.d/ directory named, for example, securelogcli.conf . Add the following lines to the /etc/rsyslog.d/securelogcli.conf file: # Set certificate files global( DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem" DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/client-cert.pem" DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/client-key.pem" ) # Set up the action for all messages *.* action( type="omfwd" StreamDriver="ossl" StreamDriverMode="1" StreamDriverPermittedPeers="server.example.com" StreamDriverAuthMode="x509/name" target="server.example.com" port="514" protocol="tcp" ) Note If you prefer the GnuTLS driver, use the StreamDriver.Name="gtls" configuration option. Save the changes to the /etc/rsyslog.d/securelogcli.conf file. Verify the syntax of the /etc/rsyslog.conf file and other files in the /etc/rsyslog.d/ directory: Make sure the rsyslog service is running and enabled on the logging server: Restart the rsyslog service: Optional: If Rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the /var/log/messages log, for example: Where <hostname> is the hostname of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources certtool(1) , openssl(1) , update-ca-trust(8) , rsyslogd(8) , and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package at /usr/share/doc/rsyslog/html/index.html . Using the logging system role with TLS . 8.1.6. Configuring a server for receiving remote logging information over UDP The Rsyslog application enables you to configure a system to receive logging information from remote systems. To use remote logging through UDP, configure both the server and the client. The receiving server collects and analyzes the logs sent by one or more client systems. By default, rsyslog uses UDP on port 514 to receive log information from remote systems. 
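The syntax check, service management, and verification commands that Section 8.1.5 refers to without showing them are the standard rsyslog ones; a typical sequence, run as root on the server and the client as appropriate, is:

rsyslogd -N 1             # validate /etc/rsyslog.conf and the files it includes from /etc/rsyslog.d/
systemctl status rsyslog  # confirm the service is running and enabled
systemctl restart rsyslog
systemctl enable rsyslog  # optional: start automatically after reboot
logger test               # on the client: send a test message
cat /var/log/messages     # on the server: confirm the message arrived (the path depends on your templates)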
Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client systems over the UDP protocol. Prerequisites Rsyslog is installed on the server system. You are logged in as root on the server. The policycoreutils-python-utils package is installed for the optional step using the semanage command. The firewalld service is running. Procedure Optional: To use a different port for rsyslog traffic than the default port 514 : Add the syslogd_port_t SELinux type to the SELinux policy configuration, replacing portno with the port number you want rsyslog to use: Configure firewalld to allow incoming rsyslog traffic, replacing portno with the port number and zone with the zone you want rsyslog to use: Reload the firewall rules: Create a new .conf file in the /etc/rsyslog.d/ directory, for example, remotelogserv.conf , and insert the following content: # Define templates before the rules that use them # Per-Host templates for remote systems template(name="TmplAuthpriv" type="list") { constant(value="/var/log/remote/auth/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } template(name="TmplMsg" type="list") { constant(value="/var/log/remote/msg/") property(name="hostname") constant(value="/") property(name="programname" SecurePath="replace") constant(value=".log") } # Provides UDP syslog reception module(load="imudp") # This ruleset processes remote messages ruleset(name="remote1"){ authpriv.* action(type="omfile" DynaFile="TmplAuthpriv") *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg") } input(type="imudp" port="514" ruleset="remote1") Where 514 is the port number rsyslog uses by default. You can specify a different port instead. Verify the syntax of the /etc/rsyslog.conf file and all .conf files in the /etc/rsyslog.d/ directory: Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Additional resources rsyslogd(8) , rsyslog.conf(5) , semanage(8) , and firewall-cmd(1) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 8.1.7. Configuring remote logging to a server over UDP You can configure a system for forwarding log messages to a server over the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it. Prerequisites The rsyslog package is installed on the client systems that should report to the server. You have configured the server for remote logging as described in Configuring a server for receiving remote logging information over UDP . Procedure Create a new .conf file in the /etc/rsyslog.d/ directory, for example, 10-remotelogcli.conf , and insert the following content: Where: The queue.type="linkedlist" setting enables a LinkedList in-memory queue. The queue.filename setting defines a disk storage. The backup files are created with the example_fwd prefix in the working directory specified by the preceding global workDirectory directive. The action.resumeRetryCount -1 setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding. The enabled queue.saveOnShutdown="on" setting saves in-memory data if rsyslog shuts down. The portno value is the port number you want rsyslog to use. The default value is 514 . 
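The client file for Section 8.1.7 ( 10-remotelogcli.conf ) that the surrounding "Where:" items describe mirrors the TCP forwarding sketch shown for Section 8.1.4, differing only in the protocol and port; for illustration (example.com and 514 are placeholders):

*.* action(type="omfwd" queue.type="linkedlist" queue.filename="example_fwd" action.resumeRetryCount="-1" queue.saveOnShutdown="on" target="example.com" port="514" protocol="udp")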
The last line forwards all received messages to the logging server, port specification is optional. With this configuration, rsyslog sends messages to the server but keeps messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance. Note Rsyslog processes configuration files /etc/rsyslog.d/ in the lexical order. Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the /var/log/remote/msg/ hostname /root.log log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package at /usr/share/doc/rsyslog/html/index.html 8.1.8. Load balancing helper in Rsyslog When used in a cluster, you can improve Rsyslog load balancing by modifying the RebindInterval setting. RebindInterval specifies an interval at which the current connection is broken and is re-established. This setting applies to TCP, UDP, and RELP traffic. The load balancers perceive it as a new connection and forward the messages to another physical target system. RebindInterval is helpful in scenarios when a target system has changed its IP address. The Rsyslog application caches the IP address when the connection is established, therefore, the messages are sent to the same server. If the IP address changes, the UDP packets are lost until the Rsyslog service restarts. Re-establishing the connection ensures the IP is resolved by DNS again. Example usage of RebindInterval for TCP, UDP, and RELP traffic 8.1.9. Configuring reliable remote logging With the Reliable Event Logging Protocol (RELP), you can send and receive syslog messages over TCP with a much reduced risk of message loss. RELP provides reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. To use RELP, configure the imrelp input module, which runs on the server and receives the logs, and the omrelp output module, which runs on the client and sends logs to the logging server. Prerequisites You have installed the rsyslog , librelp , and rsyslog-relp packages on the server and the client systems. The specified port is permitted in SELinux and open in the firewall. Procedure Configure the client system for reliable remote logging: On the client system, create a new .conf file in the /etc/rsyslog.d/ directory named, for example, relpclient.conf , and insert the following content: Where: target_IP is the IP address of the logging server. target_port is the port of the logging server. Save the changes to the /etc/rsyslog.d/relpclient.conf file. Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Configure the server system for reliable remote logging: On the server system, create a new .conf file in the /etc/rsyslog.d/ directory named, for example, relpserv.conf , and insert the following content: Where: log_path specifies the path for storing messages. target_port is the port of the logging server. 
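The RELP configuration files created in the steps above are not reproduced here. A minimal sketch using the target_IP , target_port , and log_path placeholders from the text (the omrelp and imrelp modules come from the rsyslog-relp package; the RebindInterval setting from Section 8.1.8 can optionally be added to the omrelp action):

# /etc/rsyslog.d/relpclient.conf (client)
module(load="omrelp")
*.* action(type="omrelp" target="target_IP" port="target_port")

# /etc/rsyslog.d/relpserv.conf (server)
module(load="imrelp")
ruleset(name="relp") {
    *.* action(type="omfile" file="log_path")
}
input(type="imrelp" port="target_port" ruleset="relp")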
Use the same value as in the client configuration file. Save the changes to the /etc/rsyslog.d/relpserv.conf file. Restart the rsyslog service. Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot: Verification To verify that the client system sends messages to the server, follow these steps: On the client system, send a test message: On the server system, view the log at the specified log_path , for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources rsyslogd(8) and rsyslog.conf(5) man pages on your system Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file 8.1.10. Supported Rsyslog modules To expand the functionality of the Rsyslog application, you can use specific modules. Modules provide additional inputs (Input Modules), outputs (Output Modules), and other functionalities. A module can also provide additional configuration directives that become available after you load the module. You can list the input and output modules installed on your system by entering the following command: You can view the list of all available rsyslog modules in the /usr/share/doc/rsyslog/html/configuration/modules/idx_output.html file after you install the rsyslog-doc package. 8.1.11. Configuring the netconsole service to log kernel messages to a remote host When logging to disk or using a serial console is not possible, you can use the netconsole kernel module and the same-named service to log kernel messages over a network to a remote rsyslog service. Prerequisites A system log service, such as rsyslog is installed on the remote host. The remote system log service is configured to receive incoming log entries from this host. Procedure Install the netconsole-service package: Edit the /etc/sysconfig/netconsole file and set the SYSLOGADDR parameter to the IP address of the remote host: Enable and start the netconsole service: Verification Display the /var/log/messages file on the remote system log server. 8.1.12. Additional resources Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file rsyslog.conf(5) and rsyslogd(8) man pages on your system Configuring system logging without journald or with minimized journald usage Knowledgebase article Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article The Using the Logging system role chapter 8.2. Using the logging system role As a system administrator, you can use the logging system role to configure a Red Hat Enterprise Linux host as a logging server to collect logs from many client systems. 8.2.1. Filtering local log messages by using the logging RHEL system role You can use the property-based filter of the logging RHEL system role to filter your local log messages based on various conditions. As a result, you can achieve for example: Log clarity: In a high-traffic environment, logs can grow rapidly. The focus on specific messages, like errors, can help to identify problems faster. Optimized system performance: Excessive amount of logs is usually connected with system performance degradation. Selective logging for only the important events can prevent resource depletion, which enables your systems to run more efficiently. 
Enhanced security: Efficient filtering through security messages, like system errors and failed logins, helps to capture only the relevant logs. This is important for detecting breaches and meeting compliance standards. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: "!contains" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1] The settings specified in the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: files option supports storing logs in the local files, usually in the /var/log/ directory. The property: msg ; property: contains ; and property_value: error options specify that all logs that contain the error string are stored in the /var/log/errors.log file. The property: msg ; property: !contains ; and property_value: error options specify that all other logs are put in the /var/log/others.log file. You can replace the error value with the string by which you want to filter. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [files_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [files_output0, files_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the managed node, test the syntax of the /etc/rsyslog.conf file: On the managed node, verify that the system sends messages that contain the error string to the log: Send a test message: View the /var/log/errors.log log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) man pages on your system 8.2.2. Applying a remote logging solution by using the logging RHEL system role You can use the logging RHEL system role to configure a remote logging solution, where one or more clients take logs from the systemd-journal service and forward them to a remote server. 
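The playbook validation, run, and verification commands that the logging system role procedures in Section 8.2 reference without showing them typically look like this (run the ansible-playbook commands on the control node and the checks on the managed node; file names match the examples above):

ansible-playbook --syntax-check ~/playbook.yml   # validate the playbook syntax
ansible-playbook ~/playbook.yml                  # run the playbook
# On the managed node:
rsyslogd -N 1                                    # test the generated rsyslog configuration
logger error                                     # send a test message containing "error"
cat /var/log/errors.log                          # confirm the message was filtered into the expected file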
The server receives remote input from the remote_rsyslog and remote_files configurations, and outputs the logs to local files in directories named by remote host names. As a result, you can cover use cases where you need for example: Centralized log management: Collecting, accessing, and managing log messages of multiple machines from a single storage point simplifies day-to-day monitoring and troubleshooting tasks. Also, this use case reduces the need to log into individual machines to check the log messages. Enhanced security: Storing log messages in one central place increases chances they are in a secure and tamper-proof environment. Such an environment makes it easier to detect and respond to security incidents more effectively and to meet audit requirements. Improved efficiency in log analysis: Correlating log messages from multiple systems is important for fast troubleshooting of complex problems that span multiple machines or services. That way you can quickly analyze and cross-reference events from different sources. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Define the ports in the SELinux policy of the server or client system and open the firewall for those ports. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, see modify the SELinux policy on the client and server systems . Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1] The settings specified in the first play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: remote option covers remote inputs from the other logging system over the network. The udp_ports: [ 601 ] option defines a list of UDP port numbers to monitor. The tcp_ports: [ 601 ] option defines a list of TCP port numbers to monitor. If both udp_ports and tcp_ports is set, udp_ports is used and tcp_ports is dropped. logging_outputs Defines a list of logging output dictionaries. The type: remote_files option makes output store logs to the local files per remote host and program name originated the logs. 
logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [remote_udp_input, remote_tcp_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [remote_files_output] option specifies a list of outputs, to which the logs are sent. The settings specified in the second play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: forwards option supports sending logs to the remote logging server over the network. The severity: info option refers to log messages of informational importance. The facility: mail option refers to the type of system program that is generating the log message. The target: <host1.example.com> option specifies the hostname of the remote logging server. The udp_port: 601 / tcp_port: 601 options define the UDP/TCP ports on which the remote logging server listens. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [basic_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [forward_output0, forward_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On both the client and the server system, test the syntax of the /etc/rsyslog.conf file: Verify that the client system sends messages to the server: On the client system, send a test message: On the server system, view the /var/log/ <host2.example.com> /messages log, for example: Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 8.2.3. Using the logging RHEL system role with TLS Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over the computer network. You can use the logging RHEL system role to configure a secure transfer of log messages, where one or more clients take logs from the systemd-journal service and transfer them to a remote server while using TLS. Typically, TLS for transferring logs in a remote logging solution is used when sending sensitive data over less trusted or public networks, such as the Internet. Also, by using certificates in TLS, you can ensure that the client is forwarding logs to the correct and trusted server. This prevents man-in-the-middle attacks. 8.2.3.1. Configuring client logging with TLS You can use the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption. This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network. 
Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically when the logging_certificates variable is set. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . 
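If the CA certificate, client certificate, and private key already exist on the control node, you can omit logging_certificates and let the role copy the files to the managed node through the *_src sub-parameters described above. The following fragment is a minimal illustrative sketch only; the file names and paths are placeholders and are not taken from this procedure:
logging_pki_files:
  - ca_cert_src: /local/path/to/ca_cert.pem
    cert_src: /local/path/to/client_cert.pem
    private_key_src: /local/path/to/client_key.pem
    tls: true
Because these files are not created by logging_certificates, the ca_cert_src, cert_src, and private_key_src sub-parameters are the appropriate way to provide them, as noted above. For the destination paths on the managed node and the remaining options, see the README.md file referenced below.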
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file /usr/share/doc/rhel-system-roles/certificate/ directory Requesting certificates using RHEL system roles . rsyslog.conf(5) and syslog(3) manual pages 8.2.3.2. Configuring server logging with TLS You can use the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption. This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the server group in the Ansible inventory. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. 
Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory Requesting certificates using RHEL system roles . rsyslog.conf(5) and syslog(3) manual pages 8.2.4. Using the logging RHEL system roles with RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss. The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP stores the transaction number to each transferred command for any kind of message recovery. You can consider a remote logging system in between the RELP Client and RELP Server. The RELP Client transfers the logs to the remote logging system and the RELP Server receives all the logs sent by the remote logging system. To achieve that use case, you can use the logging RHEL system role to configure the logging system to reliably send and receive log entries. 8.2.4.1. Configuring client logging with RELP You can use the logging RHEL system role to configure a transfer of log messages stored locally to the remote logging system with RELP. This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client] The settings specified in the example playbook include the following: target This is a required parameter that specifies the host name where the remote logging system is running. port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_servers List of servers that will be allowed by the logging client to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 8.2.4.2. 
Configuring server logging with RELP You can use the logging RHEL system role to configure a server for receiving log messages from the remote logging system with RELP. This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output The settings specified in the example playbook include the following: port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_clients List of clients that will be allowed by the logging server to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. 
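Both RELP procedures refer to host groups in the Ansible inventory. As a minimal sketch only, with placeholder group and host names that are not taken from this document, a YAML inventory for this layout could look like the following:
all:
  children:
    server:
      hosts:
        logging.server.com:
    clients:
      hosts:
        managed-node-01.example.com:
        managed-node-02.example.com:
The example playbooks in this section target managed-node-01.example.com directly in their hosts: line; replace that value with the appropriate group name, such as clients or server, to apply the configuration to every host in that group.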
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 8.2.5. Additional resources Preparing a control node and managed nodes to use RHEL system roles Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html . RHEL system roles ansible-playbook(1) man page on your system | [
"yum install rsyslog-doc",
"firefox /usr/share/doc/rsyslog/html/index.html &",
"semanage port -a -t syslogd_port_t -p tcp 30514",
"firewall-cmd --zone= <zone-name> --permanent --add-port=30514/tcp success firewall-cmd --reload",
"Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides TCP syslog reception module(load=\"imtcp\") Adding this ruleset to process remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imtcp\" port=\"30514\" ruleset=\"remote1\")",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\"example_fwd\" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\"example.com\" port=\"30514\" protocol=\"tcp\" )",
"systemctl restart rsyslog",
"logger test",
"cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test",
"Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/server-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/server-key.pem\" ) TCP listener module( load=\"imtcp\" PermittedPeer=[\"client1.example.com\", \"client2.example.com\"] StreamDriver.AuthMode=\"x509/name\" StreamDriver.Mode=\"1\" StreamDriver.Name=\"ossl\" ) Start up listener at port 514 input( type=\"imtcp\" port=\"514\" )",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/client-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/client-key.pem\" ) Set up the action for all messages *.* action( type=\"omfwd\" StreamDriver=\"ossl\" StreamDriverMode=\"1\" StreamDriverPermittedPeers=\"server.example.com\" StreamDriverAuthMode=\"x509/name\" target=\"server.example.com\" port=\"514\" protocol=\"tcp\" )",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.",
"systemctl status rsyslog",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/ <hostname> /root.log Feb 25 03:53:17 <hostname> root[6064]: test",
"semanage port -a -t syslogd_port_t -p udp portno",
"firewall-cmd --zone= zone --permanent --add-port= portno /udp success firewall-cmd --reload",
"firewall-cmd --reload",
"Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides UDP syslog reception module(load=\"imudp\") This ruleset processes remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imudp\" port=\"514\" ruleset=\"remote1\")",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\" example_fwd \" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\" example.com \" port=\" portno \" protocol=\"udp\" )",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test",
"action(type=\"omfwd\" protocol=\"tcp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omfwd\" protocol=\"udp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omrelp\" RebindInterval=\"250\" target=\" example.com \" port=\"6514\" ...)",
"module(load=\"omrelp\") *.* action(type=\"omrelp\" target=\"_target_IP_\" port=\"_target_port_\")",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"ruleset(name=\"relp\"){ *.* action(type=\"omfile\" file=\"_log_path_\") } module(load=\"imrelp\") input(type=\"imrelp\" port=\"_target_port_\" ruleset=\"relp\")",
"systemctl restart rsyslog",
"systemctl enable rsyslog",
"logger test",
"cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test",
"ls /usr/lib64/rsyslog/{i,o}m *",
"yum install netconsole-service",
"SYSLOGADDR= 192.0.2.1",
"systemctl enable --now netconsole",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. Bye.",
"logger error",
"cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.",
"logger test",
"cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/configuring-logging_configuring-basic-system-settings |
Chapter 44. KIE Server | Chapter 44. KIE Server KIE Server is the server where the rules and other artifacts for Red Hat Process Automation Manager are stored and run. KIE Server is a standalone built-in component that can be used to instantiate and execute rules through interfaces available for REST, Java Message Service (JMS), or Java client-side applications, as well as to manage processes, jobs, and Red Hat build of OptaPlanner functionality through solvers. Created as a web deployable WAR file, KIE Server can be deployed on any web container. The current version of KIE Server is included with default extensions for both Red Hat Decision Manager and Red Hat Process Automation Manager. KIE Server has a low footprint with minimal memory consumption and therefore can be deployed easily on a cloud instance. Each instance of this server can open and instantiate multiple containers, which enables you to execute multiple rule services in parallel. KIE Server can be integrated with other application servers, such as Oracle WebLogic Server or IBM WebSphere Application Server, to streamline Red Hat Process Automation Manager application management. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/kie-server-con_kie-server-on-was |
Governance | Governance Red Hat Advanced Cluster Management for Kubernetes 2.12 Governance | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/governance/index |
Chapter 7. Working with containers | Chapter 7. Working with containers 7.1. Understanding Containers The basic units of OpenShift Container Platform applications are called containers . Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. OpenShift Container Platform and Kubernetes add the ability to orchestrate containers across multi-host installations. 7.1.1. About containers and RHEL kernel memory Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The RHEL kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache , and a node cache for any NUMA nodes. These caches all consume kernel memory. The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Container Platform containers to exceed the configured memory limits, resulting in the container being killed. To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache , where nproc is the number of processing units available that are reported by the nproc command. The lower limit of container requests should be this value plus the container memory requirements: USD(nproc) X 1/2 MiB 7.1.2. About the container engine and container runtime A container engine is a piece of software that processes user requests, including command line options and image pulls. The container engine uses a container runtime , also called a lower-level container runtime , to run and manage the components required to deploy and operate containers. You likely will not need to interact with the container engine or container runtime. Note The OpenShift Container Platform documentation uses the term container runtime to refer to the lower-level container runtime. Other documentation can refer to the container engine as the container runtime. OpenShift Container Platform uses CRI-O as the container engine and runC or crun as the container runtime. The default container runtime is runC. Both container runtimes adhere to the Open Container Initiative (OCI) runtime specifications. CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node. runC, developed by Docker and maintained by the Open Container Project, is a lightweight, portable container runtime written in Go. crun, developed by Red Hat, is a fast and low-memory container runtime fully written in C. 
As of OpenShift Container Platform 4.12, you can select between the two. Important crun container runtime support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . crun has several improvements over runC, including: Smaller binary Quicker processing Lower memory footprint runC has some benefits over crun, including: Most popular OCI container runtime. Longer tenure in production. Default container runtime of CRI-O. You can move between the two container runtimes as needed. For information on setting which container runtime to use, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters . 7.2. Using Init Containers to perform tasks before a pod is deployed OpenShift Container Platform provides init containers , which are specialized containers that run before application containers and can contain utilities or setup scripts not present in an app image. 7.2.1. Understanding Init Containers You can use an Init Container resource to perform tasks before the rest of a pod is deployed. A pod can have Init Containers in addition to application containers. Init containers allow you to reorganize setup scripts and binding code. An Init Container can: Contain and run utilities that are not desirable to include in the app Container image for security reasons. Contain utilities or custom code for setup that is not present in an app image. For example, there is no requirement to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup. Use Linux namespaces so that they have different filesystem views from app containers, such as access to secrets that application containers are not able to access. Each Init Container must complete successfully before the one is started. So, Init Containers provide an easy way to block or delay the startup of app containers until some set of preconditions are met. For example, the following are some ways you can use Init Containers: Wait for a service to be created with a shell command like: for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1 Register this pod with a remote server from the downward API with a command like: USD curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()' Wait for some time before starting the app Container with a command like sleep 60 . Clone a git repository into a volume. Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app Container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja. See the Kubernetes documentation for more information. 7.2.2. Creating Init Containers The following example outlines a simple pod which has two Init Containers. The first waits for myservice and the second waits for mydb . After both containers complete, the pod begins. 
Procedure Create the pod for the Init Container: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-container image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] - name: init-mydb image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] # ... Create the pod: USD oc create -f myapp.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s The pod status, Init:0/2 , indicates it is waiting for the two services. Create the myservice service. Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376 Create the pod: USD oc create -f myservice.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s The pod status, Init:1/2 , indicates it is waiting for one service, in this case the mydb service. Create the mydb service: Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377 Create the pod: USD oc create -f mydb.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m The pod status indicated that it is no longer waiting for the services and is running. 7.3. Using volumes to persist container data Files in a container are ephemeral. As such, when a container crashes or stops, the data is lost. You can use volumes to persist the data used by the containers in a pod. A volume is directory, accessible to the Containers in a pod, where data is stored for the life of the pod. 7.3.1. Understanding volumes Volumes are mounted file systems available to pods and their containers which may be backed by a number of host-local or network attached storage endpoints. Containers are not persistent by default; on restart, their contents are cleared. To ensure that the file system on the volume contains no errors and, if errors are present, to repair them when possible, OpenShift Container Platform invokes the fsck utility prior to the mount utility. This occurs when either adding a volume or updating an existing volume. The simplest volume type is emptyDir , which is a temporary directory on a single machine. Administrators may also allow you to request a persistent volume that is automatically attached to your pods. Note emptyDir volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. 7.3.2. Working with volumes using the OpenShift Container Platform CLI You can use the CLI command oc set volume to add and remove volumes and volume mounts for any object that has a pod template like replication controllers or deployment configs. You can also list volumes in pods or any object that has a pod template. 
The oc set volume command uses the following general syntax: USD oc set volume <object_selection> <operation> <mandatory_parameters> <options> Object selection Specify one of the following for the object_selection parameter in the oc set volume command: Table 7.1. Object Selection Syntax Description Example <object_type> <name> Selects <name> of type <object_type> . deploymentConfig registry <object_type> / <name> Selects <name> of type <object_type> . deploymentConfig/registry <object_type> --selector= <object_label_selector> Selects resources of type <object_type> that matched the given label selector. deploymentConfig --selector="name=registry" <object_type> --all Selects all resources of type <object_type> . deploymentConfig --all -f or --filename= <file_name> File name, directory, or URL to file to use to edit the resource. -f registry-deployment-config.json Operation Specify --add or --remove for the operation parameter in the oc set volume command. Mandatory parameters Any mandatory parameters are specific to the selected operation and are discussed in later sections. Options Any options are specific to the selected operation and are discussed in later sections. 7.3.3. Listing volumes and volume mounts in a pod You can list volumes and volume mounts in pods or pod templates: Procedure To list volumes: USD oc set volume <object_type>/<name> [options] List volume supported options: Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' For example: To list all volumes for pod p1 : USD oc set volume pod/p1 To list volume v1 defined on all deployment configs: USD oc set volume dc --all --name=v1 7.3.4. Adding volumes to a pod You can add volumes and volume mounts to a pod. Procedure To add a volume, a volume mount, or both to pod templates: USD oc set volume <object_type>/<name> --add [options] Table 7.2. Supported Options for Adding Volumes Option Description Default --name Name of the volume. Automatically generated, if not specified. -t, --type Name of the volume source. Supported values: emptyDir , hostPath , secret , configmap , persistentVolumeClaim or projected . emptyDir -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' -m, --mount-path Mount path inside the selected containers. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --path Host path. Mandatory parameter for --type=hostPath . Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --secret-name Name of the secret. Mandatory parameter for --type=secret . --configmap-name Name of the configmap. Mandatory parameter for --type=configmap . --claim-name Name of the persistent volume claim. Mandatory parameter for --type=persistentVolumeClaim . --source Details of volume source as a JSON string. Recommended if the desired volume source is not supported by --type . -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. 
api-version For example: To add a new volume source emptyDir to the registry DeploymentConfig object: USD oc set volume dc/registry --add Tip You can alternatively apply the following YAML to add the volume: Example 7.1. Sample deployment config with an added volume kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP 1 Add the volume source emptyDir . To add volume v1 with secret secret1 for replication controller r1 and mount inside the containers at /data : USD oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data Tip You can alternatively apply the following YAML to add the volume: Example 7.2. Sample replication controller with added volume and secret kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data 1 Add the volume and secret. 2 Add the container mount path. To add existing persistent volume v1 with claim name pvc1 to deployment configuration dc.json on disk, mount the volume on container c1 at /data , and update the DeploymentConfig object on the server: USD oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \ --claim-name=pvc1 --mount-path=/data --containers=c1 Tip You can alternatively apply the following YAML to add the volume: Example 7.3. Sample deployment config with persistent volume added kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data 1 Add the persistent volume claim named `pvc1. 2 Add the container mount path. To add a volume v1 based on Git repository https://github.com/namespace1/project1 with revision 5125c45f9f563 for all replication controllers: USD oc set volume rc --all --add --name=v1 \ --source='{"gitRepo": { "repository": "https://github.com/namespace1/project1", "revision": "5125c45f9f563" }}' 7.3.5. Updating volumes and volume mounts in a pod You can modify the volumes and volume mounts in a pod. Procedure Updating existing volumes using the --overwrite option: USD oc set volume <object_type>/<name> --add --overwrite [options] For example: To replace existing volume v1 for replication controller r1 with existing persistent volume claim pvc1 : USD oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 Tip You can alternatively apply the following YAML to replace the volume: Example 7.4. 
Sample replication controller with persistent volume claim named pvc1 kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data 1 Set persistent volume claim to pvc1 . To change the DeploymentConfig object d1 mount point to /opt for volume v1 : USD oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt Tip You can alternatively apply the following YAML to change the mount point: Example 7.5. Sample deployment config with mount point set to opt . kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt 1 Set the mount point to /opt . 7.3.6. Removing volumes and volume mounts from a pod You can remove a volume or volume mount from a pod. Procedure To remove a volume from pod templates: USD oc set volume <object_type>/<name> --remove [options] Table 7.3. Supported options for removing volumes Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' --confirm Indicate that you want to remove multiple volumes at once. -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To remove a volume v1 from the DeploymentConfig object d1 : USD oc set volume dc/d1 --remove --name=v1 To unmount volume v1 from container c1 for the DeploymentConfig object d1 and remove the volume v1 if it is not referenced by any containers on d1 : USD oc set volume dc/d1 --remove --name=v1 --containers=c1 To remove all volumes for replication controller r1 : USD oc set volume rc/r1 --remove --confirm 7.3.7. Configuring volumes for multiple uses in a pod You can configure a volume to allows you to share one volume for multiple uses in a single pod using the volumeMounts.subPath property to specify a subPath value inside a volume instead of the volume's root. Note You cannot add a subPath parameter to an existing scheduled pod. Procedure To view the list of files in the volume, run the oc rsh command: USD oc rsh <pod> Example output sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3 Specify the subPath : Example Pod spec with subPath parameter apiVersion: v1 kind: Pod metadata: name: my-site spec: containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data 1 Databases are stored in the mysql folder. 
2 HTML content is stored in the html folder. 7.4. Mapping volumes using projected volumes A projected volume maps several existing volume sources into the same directory. The following types of volume sources can be projected: Secrets Config Maps Downward API Note All sources are required to be in the same namespace as the pod. 7.4.1. Understanding projected volumes Projected volumes can map any combination of these volume sources into a single directory, allowing the user to: automatically populate a single volume with the keys from multiple secrets, config maps, and with downward API information, so that I can synthesize a single directory with various sources of information; populate a single volume with the keys from multiple secrets, config maps, and with downward API information, explicitly specifying paths for each item, so that I can have full control over the contents of that volume. Important When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is unable to correctly set ownership on the files in the projected volume. Therefore, the RunAsUsername permission set in the security context of a Windows pod is not honored for Windows projected volumes running in OpenShift Container Platform. The following general scenarios show how you can use projected volumes. Config map, secrets, Downward API. Projected volumes allow you to deploy containers with configuration data that includes passwords. An application using these resources could be deploying Red Hat OpenStack Platform (RHOSP) on Kubernetes. The configuration data might have to be assembled differently depending on if the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector metadata.labels can be used to produce the correct RHOSP configs. Config map + secrets. Projected volumes allow you to deploy containers involving configuration data and passwords. For example, you might execute a config map with some sensitive encrypted tasks that are decrypted using a vault password file. ConfigMap + Downward API. Projected volumes allow you to generate a config including the pod name (available via the metadata.name selector). This application can then pass the pod name along with requests to easily determine the source without using IP tracking. Secrets + Downward API. Projected volumes allow you to use a secret as a public key to encrypt the namespace of the pod (available via the metadata.namespace selector). This example allows the Operator to use the application to deliver the namespace information securely without using an encrypted transport. 7.4.1.1. Example Pod specs The following are examples of Pod specs for creating projected volumes. 
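Before the annotated examples, the following minimal sketch shows the skeleton that every projected volume shares: a single volumes entry of type projected with a sources list, mounted read-only like any other volume. The secret and config map names here are illustrative placeholders and are not defined elsewhere in this document.

apiVersion: v1
kind: Pod
metadata:
  name: projected-skeleton          # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: combined                # mounted like any ordinary volume
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: combined
    projected:                      # one volume backed by several sources
      sources:
      - secret:
          name: mysecret            # assumed to exist in the pod's namespace
      - configMap:
          name: myconfigmap         # assumed to exist in the pod's namespace

When no items are specified, every key in each source is projected as a file named after the key under the mount path. The fully annotated examples follow.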
Pod with a secret, a Downward API, and a config map apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: "/projected-volume" 2 readOnly: true 3 volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: "labels" fieldRef: fieldPath: metadata.labels - path: "cpu_limit" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11 1 Add a volumeMounts section for each container that needs the secret. 2 Specify a path to an unused directory where the secret will appear. 3 Set readOnly to true . 4 Add a volumes block to list each projected volume source. 5 Specify any name for the volume. 6 Set the execute permission on the files. 7 Add a secret. Enter the name of the secret object. Each secret you want to use must be listed. 8 Specify the path to the secrets file under the mountPath . Here, the secrets file is in /projected-volume/my-group/my-username . 9 Add a Downward API source. 10 Add a ConfigMap source. 11 Set the mode for the specific projection Note If there are multiple containers in the pod, each container needs a volumeMounts section, but only one volumes section is needed. Pod with multiple secrets with a non-default permission mode set apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511 Note The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection. 7.4.1.2. Pathing Considerations Collisions Between Keys when Configured Paths are Identical If you configure any keys with the same path, the pod spec will not be accepted as valid. In the following example, the specified path for mysecret and myconfigmap are the same: apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data Consider the following situations related to the volume file paths. Collisions Between Keys without Configured Paths The only run-time validation that can occur is when all the paths are known at pod creation, similar to the above scenario. Otherwise, when a conflict occurs the most recent specified resource will overwrite anything preceding it (this is true for resources that are updated after pod creation as well). Collisions when One Path is Explicit and the Other is Automatically Projected In the event that there is a collision due to a user specified path matching data that is automatically projected, the latter resource will overwrite anything preceding it as before 7.4.2. 
Configuring a Projected Volume for a Pod When creating projected volumes, consider the volume file path situations described in Understanding projected volumes . The following example shows how to use a projected volume to mount an existing secret volume source. The steps can be used to create a user name and password secrets from local files. You then create a pod that runs one container, using a projected volume to mount the secrets into the same shared directory. The user name and password values can be any valid string that is base64 encoded. The following example shows admin in base64: USD echo -n "admin" | base64 Example output YWRtaW4= The following example shows the password 1f2d1e2e67df in base64: USD echo -n "1f2d1e2e67df" | base64 Example output MWYyZDFlMmU2N2Rm Procedure To use a projected volume to mount an existing secret volume source. Create the secret: Create a YAML file similar to the following, replacing the password and user information as appropriate: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= Use the following command to create the secret: USD oc create -f <secrets-filename> For example: USD oc create -f secret.yaml Example output secret "mysecret" created You can check that the secret was created using the following commands: USD oc get secret <secret-name> For example: USD oc get secret mysecret Example output NAME TYPE DATA AGE mysecret Opaque 2 17h USD oc get secret <secret-name> -o yaml For example: USD oc get secret mysecret -o yaml apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: "2107" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque Create a pod with a projected volume. Create a YAML file similar to the following, including a volumes section: kind: Pod metadata: name: test-projected-volume spec: containers: - name: test-projected-volume image: busybox args: - sleep - "86400" volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1 1 The name of the secret you created. Create the pod from the configuration file: USD oc create -f <your_yaml_file>.yaml For example: USD oc create -f secret-pod.yaml Example output pod "test-projected-volume" created Verify that the pod container is running, and then watch for changes to the pod: USD oc get pod <name> For example: USD oc get pod test-projected-volume The output should appear similar to the following: Example output NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s In another terminal, use the oc exec command to open a shell to the running container: USD oc exec -it <pod> <command> For example: USD oc exec -it test-projected-volume -- /bin/sh In your shell, verify that the projected-volumes directory contains your projected sources: / # ls Example output bin home root tmp dev proc run usr etc projected-volume sys var 7.5. Allowing containers to consume API objects The Downward API is a mechanism that allows containers to consume information about API objects without coupling to OpenShift Container Platform. Such information includes the pod's name, namespace, and resource values. 
Containers can consume information from the downward API using environment variables or a volume plugin. 7.5.1. Expose pod information to Containers using the Downward API The Downward API contains such information as the pod's name, project, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. Fields within the pod are selected using the FieldRef API type. FieldRef has two fields: Field Description fieldPath The path of the field to select, relative to the pod. apiVersion The API version to interpret the fieldPath selector within. Currently, the valid selectors in the v1 API include: Selector Description metadata.name The pod's name. This is supported in both environment variables and volumes. metadata.namespace The pod's namespace. This is supported in both environment variables and volumes. metadata.labels The pod's labels. This is only supported in volumes and not in environment variables. metadata.annotations The pod's annotations. This is only supported in volumes and not in environment variables. status.podIP The pod's IP. This is only supported in environment variables and not volumes. The apiVersion field, if not specified, defaults to the API version of the enclosing pod template. 7.5.2. Understanding how to consume container values using the downward API Your containers can consume API values using environment variables or a volume plugin. Depending on the method you choose, containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Annotations and labels are available using only a volume plugin. 7.5.2.1. Consuming container values using environment variables When using a container's environment variables, use the EnvVar type's valueFrom field (of type EnvVarSource ) to specify that the variable's value should come from a FieldRef source instead of the literal value specified by the value field. Only constant attributes of the pod can be consumed this way, as environment variables cannot be updated once a process is started in a way that allows the process to be notified that the value of a variable has changed. The fields supported using environment variables are: Pod name Pod project/namespace Procedure Create a new pod spec that contains the environment variables you want the container to consume: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_POD_NAME and MY_POD_NAMESPACE values: USD oc logs -p dapi-env-test-pod 7.5.2.2. Consuming container values using a volume plugin Your containers can consume API values using a volume plugin.
Containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Procedure To use the volume plugin: Create a new pod spec that contains the environment variables you want the container to consume: Create a volume-pod.yaml file similar to the following: kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: "345" annotation2: "456" spec: containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: ["sh", "-c", "cat /tmp/etc/pod_labels /tmp/etc/pod_annotations"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml Verification Check the container's logs and verify the presence of the configured fields: USD oc logs -p dapi-volume-test-pod Example output cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api 7.5.3. Understanding how to consume container resources using the Downward API When creating pods, you can use the Downward API to inject information about computing resource requests and limits so that image and application authors can correctly create an image for specific environments. You can do this using environment variable or a volume plugin. 7.5.3.1. Consuming container resources using environment variables When creating pods, you can use the Downward API to inject information about computing resource requests and limits using environment variables. When creating the pod configuration, specify environment variables that correspond to the contents of the resources field in the spec.container field. Note If the resource limits are not included in the container configuration, the downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ "/bin/sh", "-c", "env" ] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml 7.5.3.2. Consuming container resources using a volume plugin When creating pods, you can use the Downward API to inject information about computing resource requests and limits using a volume plugin. When creating the pod configuration, use the spec.volumes.downwardAPI.items field to describe the desired resources that correspond to the spec.resources field. Note If the resource limits are not included in the container configuration, the Downward API defaults to the node's CPU and memory allocatable values. 
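One detail that the following procedure does not show: in standard Kubernetes, a resourceFieldRef also accepts an optional divisor field that scales the exposed value, which is convenient when you want memory reported in mebibytes or CPU in millicores. This is an aside rather than part of the procedure; the fragment below is a sketch, and the container name matches the example that follows only for illustration.

volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "mem_limit_mi"
      resourceFieldRef:
        containerName: client-container   # must name a container in the same pod
        resource: limits.memory
        divisor: 1Mi                       # expose the limit as a count of MiB
    - path: "cpu_request_millicores"
      resourceFieldRef:
        containerName: client-container
        resource: requests.cpu
        divisor: 1m                        # expose the request in millicores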
Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: "cpu_limit" resourceFieldRef: containerName: client-container resource: limits.cpu - path: "cpu_request" resourceFieldRef: containerName: client-container resource: requests.cpu - path: "mem_limit" resourceFieldRef: containerName: client-container resource: limits.memory - path: "mem_request" resourceFieldRef: containerName: client-container resource: requests.memory # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml 7.5.4. Consuming secrets using the Downward API When creating pods, you can use the downward API to inject secrets so image and application authors can create an image for specific environments. Procedure Create a secret to inject: Create a secret.yaml file similar to the following: apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth Create the secret object from the secret.yaml file: USD oc create -f secret.yaml Create a pod that references the username field from the above Secret object: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_SECRET_USERNAME value: USD oc logs -p dapi-env-test-pod 7.5.5. Consuming configuration maps using the Downward API When creating pods, you can use the Downward API to inject configuration map values so image and application authors can create an image for specific environments. Procedure Create a config map with the values to inject: Create a configmap.yaml file similar to the following: apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue Create the config map from the configmap.yaml file: USD oc create -f configmap.yaml Create a pod that references the above config map: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey restartPolicy: Always # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_CONFIGMAP_VALUE value: USD oc logs -p dapi-env-test-pod 7.5.6. Referencing environment variables When creating pods, you can reference the value of a previously defined environment variable by using the USD() syntax. 
If the environment variable reference can not be resolved, the value will be left as the provided string. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_ENV_VAR_REF_ENV value: USD oc logs -p dapi-env-test-pod 7.5.7. Escaping environment variable references When creating a pod, you can escape an environment variable reference by using a double dollar sign. The value will then be set to a single dollar sign version of the provided value. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_NEW_ENV value: USD oc logs -p dapi-env-test-pod 7.6. Copying files to or from an OpenShift Container Platform container You can use the CLI to copy local files to or from a remote directory in a container using the rsync command. 7.6.1. Understanding how to copy files The oc rsync command, or remote sync, is a useful tool for copying database archives to and from your pods for backup and restore purposes. You can also use oc rsync to copy source code changes into a running pod for development debugging, when the running pod supports hot reload of source files. USD oc rsync <source> <destination> [-c <container>] 7.6.1.1. Requirements Specifying the Copy Source The source argument of the oc rsync command must point to either a local directory or a pod directory. Individual files are not supported. When specifying a pod directory the directory name must be prefixed with the pod name: <pod name>:<dir> If the directory name ends in a path separator ( / ), only the contents of the directory are copied to the destination. Otherwise, the directory and its contents are copied to the destination. Specifying the Copy Destination The destination argument of the oc rsync command must point to a directory. If the directory does not exist, but rsync is used for copy, the directory is created for you. Deleting Files at the Destination The --delete flag may be used to delete any files in the remote directory that are not in the local directory. Continuous Syncing on File Change Using the --watch option causes the command to monitor the source path for any file system changes, and synchronizes changes when they occur. With this argument, the command runs forever. Synchronization occurs after short quiet periods to ensure a rapidly changing file system does not result in continuous synchronization calls. When using the --watch option, the behavior is effectively the same as manually invoking oc rsync repeatedly, including any arguments normally passed to oc rsync . Therefore, you can control the behavior via the same flags used with manual invocations of oc rsync , such as --delete . 7.6.2. 
Copying files to and from containers Support for copying local files to or from a container is built into the CLI. Prerequisites When working with oc rsync , note the following: rsync must be installed. The oc rsync command uses the local rsync tool, if present on the client machine and the remote container. If rsync is not found locally or in the remote container, a tar archive is created locally and sent to the container where the tar utility is used to extract the files. If tar is not available in the remote container, the copy will fail. The tar copy method does not provide the same functionality as oc rsync . For example, oc rsync creates the destination directory if it does not exist and only sends files that are different between the source and the destination. Note In Windows, the cwRsync client should be installed and added to the PATH for use with the oc rsync command. Procedure To copy a local directory to a pod directory: USD oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name> For example: USD oc rsync /home/user/source devpod1234:/src -c user-container To copy a pod directory to a local directory: USD oc rsync devpod1234:/src /home/user/source Example output USD oc rsync devpod1234:/src/status.txt /home/user/ 7.6.3. Using advanced Rsync features The oc rsync command exposes fewer command line options than standard rsync . In the case that you want to use a standard rsync command line option that is not available in oc rsync , for example the --exclude-from=FILE option, it might be possible to use standard rsync 's --rsh ( -e ) option or RSYNC_RSH environment variable as a workaround, as follows: USD rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> or: Export the RSYNC_RSH variable: USD export RSYNC_RSH='oc rsh' Then, run the rsync command: USD rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> Both of the above examples configure standard rsync to use oc rsh as its remote shell program to enable it to connect to the remote pod, and are an alternative to running oc rsync . 7.7. Executing remote commands in an OpenShift Container Platform container You can use the CLI to execute remote commands in an OpenShift Container Platform container. 7.7.1. Executing remote commands in containers Support for remote container command execution is built into the CLI. Procedure To run a command in a container: USD oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>] For example: USD oc exec mypod date Example output Thu Apr 9 02:21:53 UTC 2015 Important For security purposes , the oc exec command does not work when accessing privileged containers except when the command is executed by a cluster-admin user. 7.7.2. Protocol for initiating a remote command from a client Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command> In the above URL: <node_name> is the FQDN of the node. <namespace> is the project of the target pod. <pod> is the name of the target pod. <container> is the name of the target container. <command> is the desired command to be executed. For example: /proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date Additionally, the client can add parameters to the request to indicate if: the client should send input to the remote container's command (stdin). the client's terminal is a TTY. 
the remote container's command should send output from stdout to the client. the remote container's command should send output from stderr to the client. After sending an exec request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses HTTP/2 . The client creates one stream each for stdin, stdout, and stderr. To distinguish among the streams, the client sets the streamType header on the stream to one of stdin , stdout , or stderr . The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request. 7.8. Using port forwarding to access applications in a container OpenShift Container Platform supports port forwarding to pods. 7.8.1. Understanding port forwarding You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod. Support for port forwarding is built into the CLI: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] The CLI listens on each local port specified by the user, forwarding using the protocol described below. Ports may be specified using the following formats: 5000 The client listens on port 5000 locally and forwards to 5000 in the pod. 6000:5000 The client listens on port 6000 locally and forwards to 5000 in the pod. :5000 or 0:5000 The client selects a free local port and forwards to 5000 in the pod. OpenShift Container Platform handles port-forward requests from clients. Upon receiving a request, OpenShift Container Platform upgrades the response and waits for the client to create port-forwarding streams. When OpenShift Container Platform receives a new stream, it copies data between the stream and the pod's port. Architecturally, there are options for forwarding to a pod's port. The supported OpenShift Container Platform implementation invokes nsenter directly on the node host to enter the pod's network namespace, then invokes socat to copy data between the stream and the pod's port. However, a custom implementation could include running a helper pod that then runs nsenter and socat , so that those binaries are not required to be installed on the host. 7.8.2. Using port forwarding You can use the CLI to port-forward one or more local ports to a pod. Procedure Use the following command to listen on the specified port in a pod: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] For example: Use the following command to listen on ports 5000 and 6000 locally and forward data to and from ports 5000 and 6000 in the pod: USD oc port-forward <pod> 5000 6000 Example output Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000 Use the following command to listen on port 8888 locally and forward to 5000 in the pod: USD oc port-forward <pod> 8888:5000 Example output Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000 Use the following command to listen on a free port locally and forward to 5000 in the pod: USD oc port-forward <pod> :5000 Example output Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000 Or: USD oc port-forward <pod> 0:5000 7.8.3. 
Protocol for initiating port forwarding from a client Clients initiate port forwarding to a pod by issuing a request to the Kubernetes API server: In the above URL: <node_name> is the FQDN of the node. <namespace> is the namespace of the target pod. <pod> is the name of the target pod. For example: After sending a port forward request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses Hyptertext Transfer Protocol Version 2 (HTTP/2) . The client creates a stream with the port header containing the target port in the pod. All data written to the stream is delivered via the kubelet to the target pod and port. Similarly, all data sent from the pod for that forwarded connection is delivered back to the same stream in the client. The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request. 7.9. Using sysctls in containers Sysctl settings are exposed through Kubernetes, allowing users to modify certain kernel parameters at runtime. Only sysctls that are namespaced can be set independently on pods. If a sysctl is not namespaced, called node-level , you must use another method of setting the sysctl, such as by using the Node Tuning Operator. Network sysctls are a special category of sysctl. Network sysctls include: System-wide sysctls, for example net.ipv4.ip_local_port_range , that are valid for all networking. You can set these independently for each pod on a node. Interface-specific sysctls, for example net.ipv4.conf.IFNAME.accept_local , that only apply to a specific additional network interface for a given pod. You can set these independently for each additional network configuration. You set these by using a configuration in the tuning-cni after the network interfaces are created. Moreover, only those sysctls considered safe are whitelisted by default; you can manually enable other unsafe sysctls on the node to be available to the user. Additional resources If you are setting the sysctl and it is not node-level, you can find information on this procedure in the section Using the Node Tuning Operator . 7.9.1. About sysctls In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available from the /proc/sys/ virtual process file system. The parameters cover various subsystems, such as: kernel (common prefix: kernel. ) networking (common prefix: net. ) virtual memory (common prefix: vm. ) MDADM (common prefix: dev. ) More subsystems are described in Kernel documentation . To get a list of all parameters, run: USD sudo sysctl -a 7.9.2. Namespaced and node-level sysctls A number of sysctls are namespaced in the Linux kernels. This means that you can set them independently for each pod on a node. Being namespaced is a requirement for sysctls to be accessible in a pod context within Kubernetes. The following sysctls are known to be namespaced: kernel.shm* kernel.msg* kernel.sem fs.mqueue.* Additionally, most of the sysctls in the net.* group are known to be namespaced. Their namespace adoption differs based on the kernel version and distributor. Sysctls that are not namespaced are called node-level and must be set manually by the cluster administrator, either by means of the underlying Linux distribution of the nodes, such as by modifying the /etc/sysctls.conf file, or by using a daemon set with privileged containers. You can use the Node Tuning Operator to set node-level sysctls. 
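As a sketch of the daemon set approach mentioned above, the following illustrative manifest runs a privileged container on every node, writes a node-level parameter once, and then sleeps. The name, namespace, and sysctl value are assumptions chosen for the example, not values taken from this document; the Node Tuning Operator remains the supported way to manage node-level sysctls.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-sysctl-setter        # illustrative name
  namespace: kube-system          # assumption: a namespace that permits privileged pods
spec:
  selector:
    matchLabels:
      app: node-sysctl-setter
  template:
    metadata:
      labels:
        app: node-sysctl-setter
    spec:
      containers:
      - name: setter
        image: registry.access.redhat.com/ubi8/ubi:latest
        # vm.* parameters are not namespaced, so a privileged write affects the node itself
        command: ["/bin/sh", "-c", "echo 262144 > /proc/sys/vm/max_map_count && sleep infinity"]  # example value only
        securityContext:
          privileged: true        # required to modify node-level parameters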
Note Consider marking nodes with special sysctls as tainted. Only schedule pods onto them that need those sysctl settings. Use the taints and toleration feature to mark the nodes. 7.9.3. Safe and unsafe sysctls Sysctls are grouped into safe and unsafe sysctls. For system-wide sysctls to be considered safe, they must be namespaced. A namespaced sysctl ensures there is isolation between namespaces and therefore pods. If you set a sysctl for one pod it must not add any of the following: Influence any other pod on the node Harm the node health Gain CPU or memory resources outside of the resource limits of a pod Note Being namespaced alone is not sufficient for the sysctl to be considered safe. Any sysctl that is not added to the allowed list on OpenShift Container Platform is considered unsafe for OpenShift Container Platform. Unsafe sysctls are not allowed by default. For system-wide sysctls the cluster administrator must manually enable them on a per-node basis. Pods with disabled unsafe sysctls are scheduled but do not launch. Note You cannot manually enable interface-specific unsafe sysctls. OpenShift Container Platform adds the following system-wide and interface-specific safe sysctls to an allowed safe list: Table 7.4. System-wide safe sysctls sysctl Description kernel.shm_rmid_forced When set to 1 , all shared memory objects in current IPC namespace are automatically forced to use IPC_RMID. For more information, see shm_rmid_forced . net.ipv4.ip_local_port_range Defines the local port range that is used by TCP and UDP to choose the local port. The first number is the first port number, and the second number is the last local port number. If possible, it is better if these numbers have different parity (one even and one odd value). They must be greater than or equal to ip_unprivileged_port_start . The default values are 32768 and 60999 respectively. For more information, see ip_local_port_range . net.ipv4.tcp_syncookies When net.ipv4.tcp_syncookies is set, the kernel handles TCP SYN packets normally until the half-open connection queue is full, at which time, the SYN cookie functionality kicks in. This functionality allows the system to keep accepting valid connections, even if under a denial-of-service attack. For more information, see tcp_syncookies . net.ipv4.ping_group_range This restricts ICMP_PROTO datagram sockets to users in the group range. The default is 1 0 , meaning that nobody, not even root, can create ping sockets. For more information, see ping_group_range . net.ipv4.ip_unprivileged_port_start This defines the first unprivileged port in the network namespace. To disable all privileged ports, set this to 0 . Privileged ports must not overlap with the ip_local_port_range . For more information, see ip_unprivileged_port_start . Table 7.5. Interface-specific safe sysctls sysctl Description net.ipv4.conf.IFNAME.accept_redirects Accept IPv4 ICMP redirect messages. net.ipv4.conf.IFNAME.accept_source_route Accept IPv4 packets with strict source route (SRR) option. net.ipv4.conf.IFNAME.arp_accept Define behavior for gratuitous ARP frames with an IPv4 address that is not already present in the ARP table: 0 - Do not create new entries in the ARP table. 1 - Create new entries in the ARP table. net.ipv4.conf.IFNAME.arp_notify Define mode for notification of IPv4 address and device changes. net.ipv4.conf.IFNAME.disable_policy Disable IPSEC policy (SPD) for this IPv4 interface. 
net.ipv4.conf.IFNAME.secure_redirects Accept ICMP redirect messages only to gateways listed in the interface's current gateway list. net.ipv4.conf.IFNAME.send_redirects Send redirects is enabled only if the node acts as a router. That is, a host should not send an ICMP redirect message. It is used by routers to notify the host about a better routing path that is available for a particular destination. net.ipv6.conf.IFNAME.accept_ra Accept IPv6 Router advertisements; autoconfigure using them. It also determines whether or not to transmit router solicitations. Router solicitations are transmitted only if the functional setting is to accept router advertisements. net.ipv6.conf.IFNAME.accept_redirects Accept IPv6 ICMP redirect messages. net.ipv6.conf.IFNAME.accept_source_route Accept IPv6 packets with SRR option. net.ipv6.conf.IFNAME.arp_accept Define behavior for gratuitous ARP frames with an IPv6 address that is not already present in the ARP table: 0 - Do not create new entries in the ARP table. 1 - Create new entries in the ARP table. net.ipv6.conf.IFNAME.arp_notify Define mode for notification of IPv6 address and device changes. net.ipv6.neigh.IFNAME.base_reachable_time_ms This parameter controls the hardware address to IP mapping lifetime in the neighbour table for IPv6. net.ipv6.neigh.IFNAME.retrans_time_ms Set the retransmit timer for neighbor discovery messages. Note When setting these values using the tuning CNI plugin, use the value IFNAME literally. The interface name is represented by the IFNAME token, and is replaced with the actual name of the interface at runtime. 7.9.4. Updating the interface-specific safe sysctls list OpenShift Container Platform includes a predefined list of safe interface-specific sysctls . You can modify this list by updating the cni-sysctl-allowlist in the openshift-multus namespace. Important The support for updating the interface-specific safe sysctls list is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Follow this procedure to modify the predefined list of safe sysctls . This procedure describes how to extend the default allow list. Procedure View the existing predefined list by running the following command: USD oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml Expected output apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. 
release.openshift.io/version: 4.12.0-0.nightly-2022-11-16-003434 creationTimestamp: "2022-11-17T14:09:27Z" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: "2422" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3 Edit the list by using the following command: USD oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml For example, to allow you to be able to implement stricter reverse path forwarding you need to add ^net.ipv4.conf.IFNAME.rp_filterUSD and ^net.ipv6.conf.IFNAME.rp_filterUSD to the list as shown here: # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD Save the changes to the file and exit. Note The removal of sysctls is also supported. Edit the file, remove the sysctl or sysctls then save the changes and exit. Verification Follow this procedure to enforce stricter reverse path forwarding for IPv4. For more information on reverse path forwarding see Reverse Path Forwarding . Create a network attachment definition, such as reverse-path-fwd-example.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "tuningnad", "plugins": [{ "type": "bridge" }, { "type": "tuning", "sysctl": { "net.ipv4.conf.IFNAME.rp_filter": "1" } } ] }' Apply the yaml by running the following command: USD oc apply -f reverse-path-fwd-example.yaml Example output networkattachmentdefinition.k8.cni.cncf.io/tuningnad created Create a pod such as examplepod.yaml using the following YAML: apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL 1 Specify the name of the configured NetworkAttachmentDefinition . Apply the yaml by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh example Verify the value of the configured sysctl flag. For example, find the value net.ipv4.conf.net1.rp_filter by running the following command: sh-4.4# sysctl net.ipv4.conf.net1.rp_filter Expected output net.ipv4.conf.net1.rp_filter = 1 Additional resources Configuring the tuning CNI Linux networking documentation 7.9.5. 
Starting a pod with safe sysctls You can set sysctls on pods using the pod's securityContext . The securityContext applies to all containers in the same pod. Safe sysctls are allowed by default. This example uses the pod securityContext to set the following safe sysctls: kernel.shm_rmid_forced net.ipv4.ip_local_port_range net.ipv4.tcp_syncookies net.ipv4.ping_group_range Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects. Use this procedure to start a pod with the configured sysctl settings. Note In most cases you modify an existing pod definition and add the securityContext spec. Procedure Create a YAML file sysctl_pod.yaml that defines an example pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: ["ALL"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "1" - name: net.ipv4.ip_local_port_range value: "32770 60666" - name: net.ipv4.tcp_syncookies value: "0" - name: net.ipv4.ping_group_range value: "0 200000000" 1 runAsUser controls which user ID the container is run with. 2 runAsGroup controls which primary group ID the containers is run with. 3 allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process. 4 capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod. 5 runAsNonRoot: true requires that the container will run with a user with any UID other than 0. 6 RuntimeDefault enables the default seccomp profile for a pod or container workload. Create the pod by running the following command: USD oc apply -f sysctl_pod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s Log in to the pod by running the following command: USD oc rsh sysctl-example Verify the values of the configured sysctl flags. For example, find the value kernel.shm_rmid_forced by running the following command: sh-4.4# sysctl kernel.shm_rmid_forced Expected output kernel.shm_rmid_forced = 1 7.9.6. Starting a pod with unsafe sysctls A pod with unsafe sysctls fails to launch on any node unless the cluster administrator explicitly enables unsafe sysctls for that node. As with node-level sysctls, use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes. The following example uses the pod securityContext to set a safe sysctl kernel.shm_rmid_forced and two unsafe sysctls, net.core.somaxconn and kernel.msgmax . There is no distinction between safe and unsafe sysctls in the specification. Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects. 
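Before the failure example below, a brief sketch of the scheduling side mentioned above: a pod that requires unsafe sysctls typically carries a nodeSelector or toleration that matches only the nodes where the administrator has enabled them. The label and taint keys and values here are assumptions, not values defined elsewhere in this document.

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-scheduling-example   # illustrative name
spec:
  nodeSelector:
    sysctl: allowed                 # assumption: label applied by the administrator
  tolerations:
  - key: "sysctl"                   # assumption: taint placed on the dedicated nodes
    operator: "Equal"
    value: "unsafe"
    effect: "NoSchedule"
  containers:
  - name: podexample
    image: centos
    command: ["bin/bash", "-c", "sleep INF"]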
The following example illustrates what happens when you add safe and unsafe sysctls to a pod specification: Procedure Create a YAML file sysctl-example-unsafe.yaml that defines an example pod and add the securityContext specification, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" Create the pod using the following command: USD oc apply -f sysctl-example-unsafe.yaml Verify that the pod is scheduled but does not deploy because unsafe sysctls are not allowed for the node using the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s 7.9.7. Enabling unsafe sysctls A cluster administrator can allow certain unsafe sysctls for very special situations such as high performance or real-time application tuning. If you want to use unsafe sysctls, a cluster administrator must enable them individually for a specific type of node. The sysctls must be namespaced. You can further control which sysctls are set in pods by specifying lists of sysctls or sysctl patterns in the allowedUnsafeSysctls field of the Security Context Constraints. The allowedUnsafeSysctls option controls specific needs such as high performance or real-time application tuning. Warning Due to their nature of being unsafe, the use of unsafe sysctls is at-your-own-risk and can lead to severe problems, such as improper behavior of containers, resource shortage, or breaking a node. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to decide how to label your machine config by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m Add a label to the machine config pool where the containers with the unsafe sysctls will run by running the following command: USD oc label machineconfigpool worker custom-kubelet=sysctl Create a YAML file set-sysctl-worker.yaml that defines a KubeletConfig custom resource (CR): apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - "kernel.msg*" - "net.core.somaxconn" 1 Specify the label from the machine config pool. 2 List the unsafe sysctls you want to allow. 
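As an aside before you create the object: the allowedUnsafeSysctls field mentioned at the start of this section also exists on the SecurityContextConstraints object, together with a forbiddenSysctls counterpart, so access can additionally be scoped per SCC rather than only per node. The following fragment is a sketch; the SCC name and the listed sysctls are illustrative, and the remaining SCC fields are omitted for brevity.

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: sysctl-allow-example        # illustrative name
# ...the usual SCC strategy, user, and group fields are omitted here...
allowedUnsafeSysctls:
- kernel.msg*
- net.core.somaxconn
forbiddenSysctls:
- kernel.shm_rmid_forced            # example: explicitly block an otherwise safe sysctl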
Create the object by running the following command: USD oc apply -f set-sysctl-worker.yaml Wait for the Machine Config Operator to generate the new rendered configuration and apply it to the machines by running the following command: USD oc get machineconfigpool worker -w After some minutes the UPDATING status changes from True to False: NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m Create a YAML file sysctl-example-safe-unsafe.yaml that defines an example pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" Create the pod by running the following command: USD oc apply -f sysctl-example-safe-unsafe.yaml Expected output Warning: would violate PodSecurity "restricted:latest": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s Log in to the pod by running the following command: USD oc rsh sysctl-example-safe-unsafe Verify the values of the configured sysctl flags. For example, find the value net.core.somaxconn by running the following command: sh-4.4# sysctl net.core.somaxconn Expected output net.core.somaxconn = 1024 The unsafe sysctl is now allowed and the value is set as defined in the securityContext spec of the updated pod specification. 7.9.8. Additional resources Setting interface-level network sysctls Using the Node Tuning Operator | [
"USD(nproc) X 1/2 MiB",
"for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1",
"curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'",
"apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-container image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] - name: init-mydb image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;']",
"oc create -f myapp.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f myservice.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377",
"oc create -f mydb.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m",
"oc set volume <object_selection> <operation> <mandatory_parameters> <options>",
"oc set volume <object_type>/<name> [options]",
"oc set volume pod/p1",
"oc set volume dc --all --name=v1",
"oc set volume <object_type>/<name> --add [options]",
"oc set volume dc/registry --add",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP",
"oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'",
"oc set volume <object_type>/<name> --add --overwrite [options]",
"oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data",
"oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt",
"oc set volume <object_type>/<name> --remove [options]",
"oc set volume dc/d1 --remove --name=v1",
"oc set volume dc/d1 --remove --name=v1 --containers=c1",
"oc set volume rc/r1 --remove --confirm",
"oc rsh <pod>",
"sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3",
"apiVersion: v1 kind: Pod metadata: name: my-site spec: containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data",
"echo -n \"admin\" | base64",
"YWRtaW4=",
"echo -n \"1f2d1e2e67df\" | base64",
"MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=",
"oc create -f <secrets-filename>",
"oc create -f secret.yaml",
"secret \"mysecret\" created",
"oc get secret <secret-name>",
"oc get secret mysecret",
"NAME TYPE DATA AGE mysecret Opaque 2 17h",
"oc get secret <secret-name> -o yaml",
"oc get secret mysecret -o yaml",
"apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque",
"kind: Pod metadata: name: test-projected-volume spec: containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1",
"oc create -f <your_yaml_file>.yaml",
"oc create -f secret-pod.yaml",
"pod \"test-projected-volume\" created",
"oc get pod <name>",
"oc get pod test-projected-volume",
"NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s",
"oc exec -it <pod> <command>",
"oc exec -it test-projected-volume -- /bin/sh",
"/ # ls",
"bin home root tmp dev proc run usr etc projected-volume sys var",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never",
"oc create -f volume-pod.yaml",
"oc logs -p dapi-volume-test-pod",
"cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory",
"oc create -f pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory",
"oc create -f volume-pod.yaml",
"apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth",
"oc create -f secret.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue",
"oc create -f configmap.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey restartPolicy: Always",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"oc rsync <source> <destination> [-c <container>]",
"<pod name>:<dir>",
"oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>",
"oc rsync /home/user/source devpod1234:/src -c user-container",
"oc rsync devpod1234:/src /home/user/source",
"oc rsync devpod1234:/src/status.txt /home/user/",
"rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"export RSYNC_RSH='oc rsh'",
"rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]",
"oc exec mypod date",
"Thu Apr 9 02:21:53 UTC 2015",
"/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>",
"/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> 5000 6000",
"Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000",
"oc port-forward <pod> 8888:5000",
"Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000",
"oc port-forward <pod> :5000",
"Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000",
"oc port-forward <pod> 0:5000",
"/proxy/nodes/<node_name>/portForward/<namespace>/<pod>",
"/proxy/nodes/node123.openshift.com/portForward/myns/mypod",
"sudo sysctl -a",
"oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml",
"apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.12.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3",
"oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml",
"Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. # apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'",
"oc apply -f reverse-path-fwd-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s",
"oc rsh example",
"sh-4.4# sysctl net.ipv4.conf.net1.rp_filter",
"net.ipv4.conf.net1.rp_filter = 1",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"",
"oc apply -f sysctl_pod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s",
"oc rsh sysctl-example",
"sh-4.4# sysctl kernel.shm_rmid_forced",
"kernel.shm_rmid_forced = 1",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f sysctl-example-unsafe.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m",
"oc label machineconfigpool worker custom-kubelet=sysctl",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"",
"oc apply -f set-sysctl-worker.yaml",
"oc get machineconfigpool worker -w",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f sysctl-example-safe-unsafe.yaml",
"Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s",
"oc rsh sysctl-example-safe-unsafe",
"sh-4.4# sysctl net.core.somaxconn",
"net.core.somaxconn = 1024"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/nodes/working-with-containers |
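Editorial note on the sysctl examples above: the record closes with manual spot checks (oc apply, oc get pod, oc rsh, sysctl). The short shell sketch below only illustrates stringing those same steps together; it assumes the sysctl_pod.yaml manifest and the sysctl-example pod name from the preceding commands, and oc wait is used here purely as a convenience, so adjust names and timeouts to your environment.

#!/bin/bash
# Sketch: apply the safe-sysctl pod from the record above and verify the settings.
set -euo pipefail

oc apply -f sysctl_pod.yaml      # manifest name taken from the example above

# Wait for the pod to become Ready before inspecting it.
oc wait --for=condition=Ready pod/sysctl-example --timeout=120s

# Confirm two of the requested safe sysctls inside the container namespace.
oc rsh sysctl-example sysctl kernel.shm_rmid_forced net.ipv4.tcp_syncookies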
Chapter 1. Getting started with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa | Chapter 1. Getting started with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa Setup and basic usage of the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 1.1. About the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa Use the rosa command-line utility for Red Hat OpenShift Service on AWS (ROSA) to create, update, manage, and delete Red Hat OpenShift Service on AWS clusters and resources. 1.2. Setting up the ROSA CLI Use the following steps to install and configure the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your installation host. Procedure Download the latest version of the ROSA CLI ( rosa ) for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive: USD tar xvf rosa-linux.tar.gz Add rosa to your path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv rosa /usr/local/bin/rosa Verify if the ROSA CLI is installed correctly by querying the rosa version: USD rosa version Example output 1.2.15 Your ROSA CLI is up to date. Optional: Enable tab completion for the ROSA CLI. With tab completion enabled, you can press the Tab key twice to automatically complete subcommands and receive command suggestions: To enable persistent tab completion for Bash on a Linux host: Generate a rosa tab completion configuration file for Bash and save it to your /etc/bash_completion.d/ directory: # rosa completion bash > /etc/bash_completion.d/rosa Open a new terminal to activate the configuration. To enable persistent tab completion for Bash on a macOS host: Generate a rosa tab completion configuration file for Bash and save it to your /usr/local/etc/bash_completion.d/ directory: USD rosa completion bash > /usr/local/etc/bash_completion.d/rosa Open a new terminal to activate the configuration. To enable persistent tab completion for Zsh: If tab completion is not enabled for your Zsh environment, enable it by running the following command: USD echo "autoload -U compinit; compinit" >> ~/.zshrc Generate a rosa tab completion configuration file for Zsh and save it to the first directory in your functions path: USD rosa completion zsh > "USD{fpath[1]}/_rosa" Open a new terminal to activate the configuration. To enable persistent tab completion for fish: Generate a rosa tab completion configuration file for fish and save it to your ~/.config/fish/completions/ directory: USD rosa completion fish > ~/.config/fish/completions/rosa.fish Open a new terminal to activate the configuration. To enable persistent tab completion for PowerShell: Generate a rosa tab completion configuration file for PowerShell and save it to a file named rosa.ps1 : PS> rosa completion powershell | Out-String | Invoke-Expression Source the rosa.ps1 file from your PowerShell profile. Note For more information about configuring rosa tab completion, see the help menu by running the rosa completion --help command. 1.3. Configuring the ROSA CLI Use the following commands to configure the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 1.3.1. login Log in to your Red Hat account, saving the credentials to the rosa configuration file. You must provide a token when logging in. You can copy your token from the Red Hat OpenShift Service on AWS token page . 
The ROSA CLI ( rosa ) looks for a token in the following priority order: Command-line arguments The ROSA_TOKEN environment variable The rosa configuration file Interactively from a command-line prompt Syntax USD rosa login [arguments] Table 1.1. Arguments Option Definition --client-id The OpenID client identifier (string). Default: cloud-services --client-secret The OpenID client secret (string). --insecure Enables insecure communication with the server. This disables verification of TLS certificates and host names. --scope The OpenID scope (string). If this option is used, it replaces the default scopes. This can be repeated multiple times to specify multiple scopes. Default: openid --token Accesses or refreshes the token (string). --token-url The OpenID token URL (string). Default: https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token Table 1.2. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 1.3.2. logout Log out of rosa . Logging out also removes the rosa configuration file. Syntax USD rosa logout [arguments] Table 1.3. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 1.3.3. verify permissions Verify that the AWS permissions required to create a ROSA cluster are configured correctly: Syntax USD rosa verify permissions [arguments] Note This command verifies permissions only for clusters that do not use the AWS Security Token Service (STS). Table 1.4. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --region The AWS region (string) in which to run the command. This value overrides the AWS_REGION environment variable. --profile Specifies an AWS profile (string) from your credentials file. Examples Verify that the AWS permissions are configured correctly: USD rosa verify permissions Verify that the AWS permissions are configured correctly in a specific region: USD rosa verify permissions --region=us-west-2 1.3.4. verify quota Verifies that AWS quotas are configured correctly for your default region. Syntax USD rosa verify quota [arguments] Table 1.5. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --region The AWS region (string) in which to run the command. This value overrides the AWS_REGION environment variable. --profile Specifies an AWS profile (string) from your credentials file. Examples Verify that the AWS quotas are configured correctly for the default region: USD rosa verify quota Verify that the AWS quotas are configured correctly in a specific region: USD rosa verify quota --region=us-west-2 1.3.5. download rosa Download the latest compatible version of the ROSA CLI. After you download rosa , extract the contents of the archive and add it to your path. See Setting up the ROSA CLI for more details. Syntax USD rosa download rosa [arguments] Table 1.6. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. 1.3.6. download oc Download the latest compatible version of the OpenShift Container Platform CLI ( oc ). After you download oc , you must extract the contents of the archive and add it to your path. 
Syntax USD rosa download oc [arguments] Table 1.7. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. Example Download oc client tools: USD rosa download oc 1.3.7. verify oc Verifies that the OpenShift Container Platform CLI ( oc ) is installed correctly. Syntax USD rosa verify oc [arguments] Table 1.8. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. Example Verify oc client tools: USD rosa verify oc 1.4. Initializing Red Hat OpenShift Service on AWS Use the init command to initialize Red Hat OpenShift Service on AWS (ROSA) only if you are using non-STS. 1.4.1. init Perform a series of checks to verify that you are ready to deploy an Red Hat OpenShift Service on AWS cluster. The list of checks includes the following: Checks to see that you have logged in (see login ) Checks that your AWS credentials are valid Checks that your AWS permissions are valid (see verify permissions ) Checks that your AWS quota levels are high enough (see verify quota ) Runs a cluster simulation to ensure cluster creation will perform as expected Checks that the osdCcsAdmin user has been created in your AWS account Checks that the OpenShift Container Platform command-line tool is available on your system Syntax USD rosa init [arguments] Table 1.9. Arguments Option Definition --region The AWS region (string) in which to verify quota and permissions. This value overrides the AWS_REGION environment variable only when running the init command, but it does not change your AWS CLI configuration. --delete Deletes the stack template that is applied to your AWS account during the init command. --client-id The OpenID client identifier (string). Default: cloud-services --client-secret The OpenID client secret (string). --insecure Enables insecure communication with the server. This disables verification of TLS certificates and host names. --scope The OpenID scope (string). If this option is used, it completely replaces the default scopes. This can be repeated multiple times to specify multiple scopes. Default: openid --token Accesses or refreshes the token (string). --token-url The OpenID token URL (string). Default: https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token Table 1.10. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Configure your AWS account to allow ROSA clusters: USD rosa init Configure a new AWS account using pre-existing OpenShift Cluster Manager credentials: USD rosa init --token=USDOFFLINE_ACCESS_TOKEN 1.5. Using a Bash script This is an example workflow of how to use a Bash script with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 
Prerequisites Make sure that AWS credentials are available as one of the following options: AWS profile Environment variables ( AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY ) Procedure Initialize rosa using an Red Hat OpenShift Cluster Manager offline token from Red Hat : USD rosa init --token=<token> Create the Red Hat OpenShift Service on AWS (ROSA) cluster: USD rosa create cluster --cluster-name=<cluster_name> Add an identity provider (IDP): USD rosa create idp --cluster=<cluster_name> --type=<identity_provider> [arguments] Add a dedicated-admin user: USD rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> 1.6. Updating the ROSA CLI Update to the latest compatible version of the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . Procedure Confirm that a new version of the ROSA CLI ( rosa ) is available: USD rosa version Example output 1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/ Download the latest compatible version of the ROSA CLI: USD rosa download rosa This command downloads an archive called rosa-*.tar.gz into the current directory. The exact name of the file depends on your operating system and system architecture. Extract the contents of the archive: USD tar -xzf rosa-linux.tar.gz Install the new version of the ROSA CLI by moving the extracted file into your path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv rosa /usr/local/bin/rosa Verification Verify that the new version of ROSA is installed. USD rosa version Example output 1.2.15 Your ROSA CLI is up to date. | [
"tar xvf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.15 Your ROSA CLI is up to date.",
"rosa completion bash > /etc/bash_completion.d/rosa",
"rosa completion bash > /usr/local/etc/bash_completion.d/rosa",
"echo \"autoload -U compinit; compinit\" >> ~/.zshrc",
"rosa completion zsh > \"USD{fpath[1]}/_rosa\"",
"rosa completion fish > ~/.config/fish/completions/rosa.fish",
"PS> rosa completion powershell | Out-String | Invoke-Expression",
"rosa login [arguments]",
"rosa logout [arguments]",
"rosa verify permissions [arguments]",
"rosa verify permissions",
"rosa verify permissions --region=us-west-2",
"rosa verify quota [arguments]",
"rosa verify quota",
"rosa verify quota --region=us-west-2",
"rosa download rosa [arguments]",
"rosa download oc [arguments]",
"rosa download oc",
"rosa verify oc [arguments]",
"rosa verify oc",
"rosa init [arguments]",
"rosa init",
"rosa init --token=USDOFFLINE_ACCESS_TOKEN",
"rosa init --token=<token>",
"rosa create cluster --cluster-name=<cluster_name>",
"rosa create idp --cluster=<cluster_name> --type=<identity_provider> [arguments]",
"rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"rosa version",
"1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/",
"rosa download rosa",
"tar -xzf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.15 Your ROSA CLI is up to date."
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/rosa_cli/rosa-get-started-cli |
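The ROSA record above already lists each command individually; as a non-authoritative sketch, the script below simply chains them in the order the Bash-script workflow describes. The token, cluster name, identity provider type (github is only an example), and user name are placeholder assumptions rather than values taken from the record, and the permissions check applies to non-STS clusters only.

#!/bin/bash
# Sketch: scripted ROSA setup using only the rosa subcommands shown above.
set -euo pipefail

ROSA_TOKEN="<offline_access_token>"   # placeholder, copy yours from OpenShift Cluster Manager
CLUSTER_NAME="my-rosa-cluster"        # placeholder cluster name
IDP_USER="<idp_user_name>"            # placeholder identity provider user

rosa init --token="${ROSA_TOKEN}"
rosa verify quota
rosa verify permissions               # relevant for clusters that do not use AWS STS

rosa create cluster --cluster-name="${CLUSTER_NAME}"

# Add an identity provider (github is just an example type) and a dedicated-admin user.
rosa create idp --cluster="${CLUSTER_NAME}" --type=github
rosa grant user dedicated-admin --user="${IDP_USER}" --cluster="${CLUSTER_NAME}"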
27.5. Enabling Console Access for Other Applications | 27.5. Enabling Console Access for Other Applications To make other applications accessible to console users, a bit more work is required. First of all, console access only works for applications which reside in /sbin/ or /usr/sbin/ , so the application that you wish to run must be there. After verifying that, do the following steps: Create a link from the name of your application, such as our sample foo program, to the /usr/bin/consolehelper application: Create the file /etc/security/console.apps/ foo : Create a PAM configuration file for the foo service in /etc/pam.d/ . An easy way to do this is to start with a copy of the halt service's PAM configuration file, and then modify the file if you want to change the behavior: Now, when /usr/bin/ foo is executed, consolehelper is called, which authenticates the user with the help of /usr/sbin/userhelper . To authenticate the user, consolehelper asks for the user's password if /etc/pam.d/ foo is a copy of /etc/pam.d/halt (otherwise, it does precisely what is specified in /etc/pam.d/ foo ) and then runs /usr/sbin/ foo with root permissions. In the PAM configuration file, an application can be configured to use the pam_timestamp module to remember (or cache) a successful authentication attempt. When an application is started and proper authentication is provided (the root password), a timestamp file is created. By default, a successful authentication is cached for five minutes. During this time, any other application that is configured to use pam_timestamp and run from the same session is automatically authenticated for the user - the user does not have to enter the root password again. This module is included in the pam package. To enable this feature, the PAM configuration file in etc/pam.d/ must include the following lines: The first line that begins with auth should be after any other auth sufficient lines, and the line that begins with session should be after any other session optional lines. If an application configured to use pam_timestamp is successfully authenticated from the Main Menu Button (on the Panel), the icon is displayed in the notification area of the panel if you are running the GNOME or KDE desktop environment. After the authentication expires (the default is five minutes), the icon disappears. The user can select to forget the cached authentication by clicking on the icon and selecting the option to forget authentication. | [
"cd /usr/bin ln -s consolehelper foo",
"touch /etc/security/console.apps/ foo",
"cp /etc/pam.d/halt /etc/pam.d/foo",
"auth sufficient /lib/security/pam_timestamp.so session optional /lib/security/pam_timestamp.so"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/console_access-enabling_console_access_for_other_applications |
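The four snippets above come from separate steps in the procedure. As a consolidated, hypothetical example for an application named foo, the sketch below repeats them in order; appending the pam_timestamp lines to the end of the copied file is a simplification, since the section notes that the auth line belongs after any other auth sufficient lines and the session line after any other session optional lines.

# Sketch only: route /usr/sbin/foo through consolehelper (run as root).
cd /usr/bin && ln -s consolehelper foo

touch /etc/security/console.apps/foo

# Start from the halt service's PAM configuration, then add timestamp caching.
cp /etc/pam.d/halt /etc/pam.d/foo
cat >> /etc/pam.d/foo << 'EOF'
auth       sufficient   /lib/security/pam_timestamp.so
session    optional     /lib/security/pam_timestamp.so
EOF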
Chapter 2. Red Hat Ansible Automation Platform Architecture | Chapter 2. Red Hat Ansible Automation Platform Architecture As a modular platform, Ansible Automation Platform provides the flexibility to easily integrate components and customize your deployment to best meet your automation requirements. The following section provides a comprehensive architectural example of an Ansible Automation Platform deployment. 2.1. Example Ansible Automation Platform architecture The Red Hat Ansible Automation Platform 2.4 reference architecture provides an example setup of a standard deployment of Ansible Automation Platform using automation mesh on Red Hat Enterprise Linux. The deployment shown takes advantage of the following components to provide a simple, secure and flexible method of handling your automation workloads, a central location for content collections, and automated resolution of IT requests. Automation controller Provides the control plane for automation through its UI, Restful API, RBAC workflows and CI/CD integrations. Automation mesh Is an overlay network that provides the ability to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks. Private automation hub Provides automation developers the ability to collaborate and publish their own automation content and streamline delivery of Ansible code within their organization. Event-Driven Ansible Provides the event-handling capability needed to automate time-consuming tasks and respond to changing conditions in any IT domain. The architecture for this example consists of the following: A two node automation controller cluster An optional hop node to connect automation controller to execution nodes A two node automation hub cluster A single node Event-Driven Ansible controller cluster A single PostgreSQL database connected to the automation controller, automation hub, and Event-Driven Ansible controller clusters Two execution nodes per automation controller cluster Figure 2.1. Example Ansible Automation Platform 2.4 architecture | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_planning_guide/aap_architecture |
Chapter 13. Installing a three-node cluster on GCP | Chapter 13. Installing a three-node cluster on GCP In OpenShift Container Platform version 4.16, you can install a three-node cluster on Google Cloud Platform (GCP). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 13.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 13.2. steps Installing a cluster on GCP with customizations Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_gcp/installing-gcp-three-node |
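For the user-provisioned flow described above, the following sketch shows one way to double-check the two settings this chapter relies on, compute replicas set to 0 and mastersSchedulable set to true, after generating manifests. The directory name and the grep checks are conveniences added here, not part of the documented procedure.

# Sketch: sanity-check the three-node settings before deploying.
INSTALL_DIR=./three-node-cluster      # assumed installation directory

# install-config.yaml in ${INSTALL_DIR} should already set the worker replicas to 0.
grep -A3 '^compute:' "${INSTALL_DIR}/install-config.yaml"

# Generate the Kubernetes manifests, then confirm control plane nodes stay schedulable.
openshift-install create manifests --dir "${INSTALL_DIR}"
grep 'mastersSchedulable' "${INSTALL_DIR}/manifests/cluster-scheduler-02-config.yml"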
Chapter 4. Fencing Controller nodes with STONITH | Chapter 4. Fencing Controller nodes with STONITH Fencing is the process of isolating a failed node to protect the cluster and the cluster resources. Without fencing, a failed node might result in data corruption in a cluster. Director uses Pacemaker to provide a highly available cluster of Controller nodes. Pacemaker uses a process called STONITH to fence failed nodes. STONITH is an acronym for "Shoot the other node in the head". STONITH is disabled by default and requires manual configuration so that Pacemaker can control the power management of each node in the cluster. If a Controller node fails a health check, the Controller node that acts as the Pacemaker designated coordinator (DC) uses the Pacemaker stonith service to fence the impacted Controller node. Important Deploying a highly available overcloud without STONITH is not supported. You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . 4.1. Supported fencing agents When you deploy a high availability environment with fencing, you can choose the fencing agents based on your environment needs. To change the fencing agent, you must configure additional parameters in the fencing.yaml file. Red Hat OpenStack Platform (RHOSP) supports the following fencing agents: Intelligent Platform Management Interface (IPMI) Default fencing mechanism that Red Hat OpenStack Platform (RHOSP) uses to manage fencing. STONITH Block Device (SBD) The SBD (Storage-Based Death) daemon integrates with Pacemaker and a watchdog device to arrange for nodes to reliably shut down when fencing is triggered and in cases where traditional fencing mechanisms are not available. Important SBD fencing is not supported in clusters with remote bare metal or virtual machine nodes that use pacemaker_remote , so it is not supported if your deployment uses Instance HA. fence_sbd and sbd poison-pill fencing with block storage devices are not supported. SBD fencing is only supported with compatible watchdog devices. For more information, see Support Policies for RHEL High Availability Clusters - sbd and fence_sbd . fence_kdump Use in deployments with the kdump crash recovery service. If you choose this agent, ensure that you have enough disk space to store the dump files. You can configure this agent as a secondary mechanism in addition to the IPMI, fence_rhevm , or Redfish fencing agents. If you configure multiple fencing agents, make sure that you allocate enough time for the first agent to complete the task before the second agent starts the task. Important RHOSP director supports only the configuration of the fence_kdump STONITH agent, and not the configuration of the full kdump service that the fencing agent depends on. For information about configuring the kdump service, see the article How do I configure fence_kdump in a Red Hat Pacemaker cluster . fence_kdump is not supported if the Pacemaker network traffic interface uses the ovs_bridges or ovs_bonds network device. To enable fence_kdump , you must change the network device to linux_bond or linux_bridge . For more information about network interface configuration, see Network interface reference . Redfish Use in deployments with servers that support the DMTF Redfish APIs. 
To specify this agent, change the value of the agent parameter to fence_redfish in the fencing.yaml file. For more information about Redfish, see the DTMF Documentation . fence_rhevm for Red Hat Virtualization (RHV) Use to configure fencing for Controller nodes that run in RHV environments. You can generate the fencing.yaml file in the same way as you do for IPMI fencing, but you must define the pm_type parameter in the nodes.json file to use RHV. By default, the ssl_insecure parameter is set to accept self-signed certificates. You can change the parameter value based on your security requirements. Important Ensure that you use a role that has permissions to create and launch virtual machines in RHV, such as UserVMManager . Multi-layered fencing You can configure multiple fencing agents to support complex fencing use cases. For example, you can configure IPMI fencing together with fence_kdump . The order of the fencing agents determines the order in which Pacemaker triggers each mechanism. Additional resources Section 4.2, "Deploying fencing on the overcloud" Section 4.3, "Testing fencing on the overcloud" Section 4.5, "Fencing parameters" 4.2. Deploying fencing on the overcloud To deploy fencing on the overcloud, first review the state of STONITH and Pacemaker and configure the fencing.yaml file. Then, deploy the overcloud and configure additional parameters. Finally, test that fencing is deployed correctly on the overcloud. Prerequisites Choose the correct fencing agent for your deployment. For the list of supported fencing agents, see Section 4.1, "Supported fencing agents" . Ensure that you can access the nodes.json file that you created when you registered your nodes in director. This file is a required input for the fencing.yaml file that you generate during deployment. The nodes.json file must contain the MAC address of one of the network interfaces (NICs) on the node. For more information, see Registering Nodes for the Overcloud . If you use the Red Hat Virtualization (RHV) fencing agent, use a role that has permissions to manage virtual machines, such as UserVMManager . Procedure Log in to each Controller node as the tripleo-admin user. Verify that the cluster is running: Example output: Verify that STONITH is disabled: Example output: Depending on the fencing agent that you want to use, choose one of the following options: If you use the IPMI or RHV fencing agent, generate the fencing.yaml environment file: Note This command converts ilo and drac power management details to IPMI equivalents. If you use a different fencing agent, such as STONITH Block Device (SBD), fence_kdump , or Redfish, or if you use pre-provisioned nodes, create the fencing.yaml file manually. SBD fencing only: Add the following parameter to the fencing.yaml file: Note This step is applicable to initial overcloud deployments only. For more information about how to enable SBD fencing on an existing overcloud, see Enabling sbd fencing in RHEL 7 and 8 . Multi-layered fencing only: Add the level-specific parameters to the generated fencing.yaml file: Replace <parameter> and <value> with the actual parameters and values that the fencing agent requires. Run the overcloud deploy command and include the fencing.yaml file and any other environment files that are relevant for your deployment: SBD fencing only: Set the watchdog timer device interval and check that the interval is set correctly. 
Verification Log in to the overcloud as the heat-admin user and ensure that Pacemaker is configured as the resource manager: In this example, Pacemaker is configured to use a STONITH resource for each of the Controller nodes that are specified in the fencing.yaml file. Note You must not configure the fence-resource process on the same node that it controls. Check the fencing resource attributes. The STONITH attribute values must match the values in the fencing.yaml file: Additional Resources Section 4.3, "Testing fencing on the overcloud" Section 4.5, "Fencing parameters" Exploring RHEL High Availability's Components - sbd and fence_sbd 4.3. Testing fencing on the overcloud To test that fencing works correctly, trigger fencing by closing all ports on a Controller node and restarting the server. Important This procedure deliberately drops all connections to the Controller node, which causes the node to restart. Prerequisites Fencing is deployed and running on the overcloud. For information on how to deploy fencing, see Section 4.2, "Deploying fencing on the overcloud" . Controller node is available for a restart. Procedure Log in to a Controller node as the stack user and source the credentials file: Change to the root user and close all connections to the Controller node: From a different Controller node, locate the fencing event in the Pacemaker log file: If the STONITH service performed the fencing action on the Controller, the log file shows a fencing event. Wait a few minutes and then verify that the restarted Controller node is running in the cluster again by running the pcs status command. If you can see the Controller node that you restarted in the output, fencing functions correctly. 4.4. Viewing STONITH device information To see how STONITH configures your fencing devices, run the pcs stonith show --full command from the overcloud. Prerequisites Fencing is deployed and running on the overcloud. For information on how to deploy fencing, see Section 4.2, "Deploying fencing on the overcloud" . Procedure Show the list of Controller nodes and the status of their STONITH devices: This output shows the following information for each resource: IPMI power management service that the fencing device uses to turn the machines on and off as needed, such as fence_ipmilan . IP address of the IPMI interface, such as 10.100.0.51 . User name to log in with, such as admin . Password to use to log in to the node, such as abc . Interval in seconds at which each host is monitored, such as 60s . 4.5. Fencing parameters When you deploy fencing on the overcloud, you generate the fencing.yaml file with the required parameters to configure fencing. The following example shows the structure of the fencing.yaml environment file: This file contains the following parameters: EnableFencing Enables the fencing functionality for Pacemaker-managed nodes. FencingConfig Lists the fencing devices and the parameters for each device: agent : Fencing agent name. host_mac : The mac address in lowercase of the provisioning interface or any other network interface on the server. You can use this as a unique identifier for the fencing device. Important Do not use the MAC address of the IPMI interface. params : List of fencing device parameters. Fencing device parameters Lists the fencing device parameters. This example shows the parameters for the IPMI fencing agent: auth : IPMI authentication type ( md5 , password , or none). ipaddr : IPMI IP address. ipport : IPMI port. login : Username for the IPMI device. 
passwd : Password for the IPMI device. lanplus : Use lanplus to improve security of connection. privlvl : Privilege level on IPMI device pcmk_host_list : List of Pacemaker hosts. Additional resources Section 4.2, "Deploying fencing on the overcloud" Section 4.1, "Supported fencing agents" 4.6. Additional resources "Configuring fencing in a Red Hat High Availability cluster" | [
"sudo pcs status",
"Cluster name: openstackHA Last updated: Wed Jun 24 12:40:27 2015 Last change: Wed Jun 24 11:36:18 2015 Stack: corosync Current DC: lb-c1a2 (2) - partition with quorum Version: 1.1.12-a14efad 3 Nodes configured 141 Resources configured",
"sudo pcs property show",
"Cluster Properties: cluster-infrastructure: corosync cluster-name: openstackHA dc-version: 1.1.12-a14efad have-watchdog: false stonith-enabled: false",
"(undercloud) USD openstack overcloud generate fencing --output fencing.yaml nodes.json",
"parameter_defaults: ExtraConfig: pacemaker::corosync::enable_sbd: true",
"parameter_defaults: EnableFencing: true FencingConfig: devices: level1: - agent: [VALUE] host_mac: aa:bb:cc:dd:ee:ff params: <parameter>: <value> level2: - agent: fence_agent2 host_mac: aa:bb:cc:dd:ee:ff params: <parameter>: <value>",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --ntp-server pool.ntp.org --neutron-network-type vxlan --neutron-tunnel-types vxlan -e fencing.yaml",
"pcs property set stonith-watchdog-timeout=<interval> pcs property show",
"source stackrc openstack server list | grep controller ssh tripleo-admin@<controller-x_ip> sudo pcs status | grep fence stonith-overcloud-controller-x (stonith:fence_ipmilan): Started overcloud-controller-y",
"sudo pcs stonith show <stonith-resource-controller-x>",
"source stackrc openstack server list | grep controller ssh tripleo-admin@<controller-x_ip>",
"sudo -i iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT && iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT && iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5016 -j ACCEPT && iptables -A INPUT -p udp -m state --state NEW -m udp --dport 5016 -j ACCEPT && iptables -A INPUT ! -i lo -j REJECT --reject-with icmp-host-prohibited && iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT && iptables -A OUTPUT -p tcp --sport 5016 -j ACCEPT && iptables -A OUTPUT -p udp --sport 5016 -j ACCEPT && iptables -A OUTPUT ! -o lo -j REJECT --reject-with icmp-host-prohibited",
"ssh tripleo-admin@<controller-x_ip> less /var/log/cluster/corosync.log (less): /fenc*",
"sudo pcs stonith show --full Resource: my-ipmilan-for-controller-0 (class=stonith type=fence_ipmilan) Attributes: pcmk_host_list=overcloud-controller-0 ipaddr=10.100.0.51 login=admin passwd=abc lanplus=1 cipher=3 Operations: monitor interval=60s (my-ipmilan-for-controller-0-monitor-interval-60s) Resource: my-ipmilan-for-controller-1 (class=stonith type=fence_ipmilan) Attributes: pcmk_host_list=overcloud-controller-1 ipaddr= 10.100.0.52 login=admin passwd=abc lanplus=1 cipher=3 Operations: monitor interval=60s (my-ipmilan-for-controller-1-monitor-interval-60s) Resource: my-ipmilan-for-controller-2 (class=stonith type=fence_ipmilan) Attributes: pcmk_host_list=overcloud-controller-2 ipaddr= 10.100.0.53 login=admin passwd=abc lanplus=1 cipher=3 Operations: monitor interval=60s (my-ipmilan-for-controller-2-monitor-interval-60s)",
"parameter_defaults: EnableFencing: true FencingConfig: devices: - agent: fence_ipmilan host_mac: 11:11:11:11:11:11 params: ipaddr: 10.0.0.101 lanplus: true login: admin passwd: InsertComplexPasswordHere pcmk_host_list: host04 privlvl: administrator"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_deployment_and_usage/assembly_fencing-controller-nodes_rhosp |
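To connect the deployment and verification steps in the record above, here is a brief, non-authoritative sketch run from the undercloud as the stack user. The generate and pcs commands are taken from the record; the <controller_ip> value is a placeholder for an address reported by openstack server list, and the overcloud deploy command itself is omitted because its environment files are deployment specific.

# Sketch: generate fencing.yaml, then spot-check STONITH after deployment.
source ~/stackrc

# Convert the IPMI power management details from nodes.json into an environment file.
openstack overcloud generate fencing --output fencing.yaml nodes.json

# Include fencing.yaml in your usual 'openstack overcloud deploy ... -e fencing.yaml' run.
# Afterwards, confirm each Controller node has a running fence resource:
openstack server list | grep controller
ssh tripleo-admin@<controller_ip> 'sudo pcs status | grep fence'
ssh tripleo-admin@<controller_ip> 'sudo pcs stonith show --full'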
Chapter 10. Container Images Based on Red Hat Software Collections 3.1 | Chapter 10. Container Images Based on Red Hat Software Collections 3.1 Component Description Supported architectures Application Images rhscl/php-70-rhel7 PHP 7.0 platform for building and running applications (EOL) x86_64 rhscl/perl-526-rhel7 Perl 5.26 platform for building and running applications (EOL) x86_64 Daemon Images rhscl/varnish-5-rhel7 Varnish Cache 5.0 HTTP reverse proxy (EOL) x86_64, s390x, ppc64le Database Images rhscl/mongodb-36-rhel7 MongoDB 3.6 NoSQL database server (EOL) x86_64 rhscl/postgresql-10-rhel7 PostgreSQL 10 SQL database server x86_64, s390x, ppc64le Red Hat Developer Toolset Images rhscl/devtoolset-7-toolchain-rhel7 Red Hat Developer Toolset toolchain (EOL) x86_64, s390x, ppc64le rhscl/devtoolset-7-perftools-rhel7 Red Hat Developer Toolset perftools (EOL) x86_64, s390x, ppc64le Legend: x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z ppc64le - IBM POWER, little endian All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.1, see the Red Hat Software Collections 3.1 Release Notes . For more information about the Red Hat Developer Toolset 7.1 components, see the Red Hat Developer Toolset 7 User Guide . For information regarding container images based on Red Hat Software Collections 2, see Using Red Hat Software Collections 2 Container Images . EOL images are no longer supported. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/rhscl_3.1_images |
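Because the record above only lists image names and architectures, the following is a hypothetical quick start for one of them. The registry path and the POSTGRESQL_* environment variables are assumptions based on common RHSCL container conventions rather than details stated in the table, so verify them against the image documentation before use.

# Sketch: pull and run the PostgreSQL 10 image listed above (registry path assumed).
podman pull registry.access.redhat.com/rhscl/postgresql-10-rhel7

podman run -d --name rhscl-postgres \
  -e POSTGRESQL_USER=user \
  -e POSTGRESQL_PASSWORD=changeme \
  -e POSTGRESQL_DATABASE=sampledb \
  -p 5432:5432 \
  registry.access.redhat.com/rhscl/postgresql-10-rhel7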
7.7. Using Channel Bonding | 7.7. Using Channel Bonding To enhance performance, adjust available module options to ascertain what combination works best. Pay particular attention to the miimon or arp_interval and the arp_ip_target parameters. See Section 7.7.1, "Bonding Module Directives" for a list of available options and how to quickly determine the best ones for your bonded interface. 7.7.1. Bonding Module Directives It is a good idea to test which channel bonding module parameters work best for your bonded interfaces before adding them to the BONDING_OPTS=" bonding parameters " directive in your bonding interface configuration file ( ifcfg-bond0 for example). Parameters to bonded interfaces can be configured without unloading (and reloading) the bonding module by manipulating files in the sysfs file system. sysfs is a virtual file system that represents kernel objects as directories, files and symbolic links. sysfs can be used to query for information about kernel objects, and can also manipulate those objects through the use of normal file system commands. The sysfs virtual file system is mounted under the /sys/ directory. All bonding interfaces can be configured dynamically by interacting with and manipulating files under the /sys/class/net/ directory. In order to determine the best parameters for your bonding interface, create a channel bonding interface file such as ifcfg-bond0 by following the instructions in Section 7.4.2, "Create a Channel Bonding Interface" . Insert the SLAVE=yes and MASTER=bond0 directives in the configuration files for each interface bonded to bond0 . Once this is completed, you can proceed to testing the parameters. First, open the bond you created by running ifup bond N as root : If you have correctly created the ifcfg-bond0 bonding interface file, you will be able to see bond0 listed in the output of running ip link show as root : To view all existing bonds, even if they are not up, run: You can configure each bond individually by manipulating the files located in the /sys/class/net/bond N /bonding/ directory. First, the bond you are configuring must be taken down: As an example, to enable MII monitoring on bond0 with a 1 second interval, run as root : To configure bond0 for balance-alb mode, run either: ...or, using the name of the mode: After configuring options for the bond in question, you can bring it up and test it by running ifup bond N . If you decide to change the options, take the interface down, modify its parameters using sysfs , bring it back up, and re-test. Once you have determined the best set of parameters for your bond, add those parameters as a space-separated list to the BONDING_OPTS= directive of the /etc/sysconfig/network-scripts/ifcfg-bond N file for the bonding interface you are configuring. Whenever that bond is brought up (for example, by the system during the boot sequence if the ONBOOT=yes directive is set), the bonding options specified in the BONDING_OPTS will take effect for that bond. The following list provides the names of many of the more common channel bonding parameters, along with a description of what they do. For more information, see the brief descriptions for each parm in modinfo bonding output, or for more detailed information, see https://www.kernel.org/doc/Documentation/networking/bonding.txt . Bonding Interface Parameters ad_select= value Specifies the 802.3ad aggregation selection logic to use. Possible values are: stable or 0 - Default setting. The active aggregator is chosen by largest aggregate bandwidth. 
Reselection of the active aggregator occurs only when all ports of the active aggregator are down or if the active aggregator has no ports. bandwidth or 1 - The active aggregator is chosen by largest aggregate bandwidth. Reselection occurs if: A port is added to or removed from the bond; Any port's link state changes; Any port's 802.3ad association state changes; The bond's administrative state changes to up. count or 2 - The active aggregator is chosen by the largest number of ports. Reselection occurs as described for the bandwidth setting above. The bandwidth and count selection policies permit failover of 802.3ad aggregations when partial failure of the active aggregator occurs. This keeps the aggregator with the highest availability, either in bandwidth or in number of ports, active at all times. arp_interval= time_in_milliseconds Specifies, in milliseconds, how often ARP monitoring occurs. Important It is essential that both arp_interval and arp_ip_target parameters are specified, or, alternatively, the miimon parameter is specified. Failure to do so can cause degradation of network performance in the event that a link fails. If using this setting while in mode=0 or mode=2 (the two load-balancing modes), the network switch must be configured to distribute packets evenly across the NICs. For more information on how to accomplish this, see https://www.kernel.org/doc/Documentation/networking/bonding.txt . The value is set to 0 by default, which disables it. arp_ip_target= ip_address [ , ip_address_2 ,... ip_address_16 ] Specifies the target IP address of ARP requests when the arp_interval parameter is enabled. Up to 16 IP addresses can be specified in a comma separated list. arp_validate= value Validate source/distribution of ARP probes; default is none . Other valid values are active , backup , and all . downdelay= time_in_milliseconds Specifies (in milliseconds) how long to wait after link failure before disabling the link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. fail_over_mac= value Specifies whether active-backup mode should set all ports to the same MAC address at the point of assignment (the traditional behavior), or, when enabled, perform special handling of the bond's MAC address in accordance with the selected policy. Possible values are: none or 0 - Default setting. This setting disables fail_over_mac , and causes bonding to set all ports of an active-backup bond to the same MAC address at the point of assignment. active or 1 - The " active " fail_over_mac policy indicates that the MAC address of the bond should always be the MAC address of the currently active port. The MAC address of the ports is not changed; instead, the MAC address of the bond changes during a failover. This policy is useful for devices that cannot ever alter their MAC address, or for devices that refuse incoming broadcasts with their own source MAC (which interferes with the ARP monitor). The disadvantage of this policy is that every device on the network must be updated by gratuitous ARP, as opposed to the normal method of switches snooping incoming traffic to update their ARP tables. If the gratuitous ARP is lost, communication may be disrupted. When this policy is used in conjunction with the MII monitor, devices which assert link up prior to being able to actually transmit and receive are particularly susceptible to loss of the gratuitous ARP, and an appropriate updelay setting may be required. 
follow or 2 - The " follow " fail_over_mac policy causes the MAC address of the bond to be selected normally (normally the MAC address of the first port added to the bond). However, the second and subsequent ports are not set to this MAC address while they are in a backup role; a port is programmed with the bond's MAC address at failover time (and the formerly active port receives the newly active port's MAC address). This policy is useful for multiport devices that either become confused or incur a performance penalty when multiple ports are programmed with the same MAC address. lacp_rate= value Specifies the rate at which link partners should transmit LACPDU packets in 802.3ad mode. Possible values are: slow or 0 - Default setting. This specifies that partners should transmit LACPDUs every 30 seconds. fast or 1 - Specifies that partners should transmit LACPDUs every 1 second. miimon= time_in_milliseconds Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active. To verify that the driver for a particular NIC supports the MII tool, type the following command as root: In this command, replace interface_name with the name of the device interface, such as enp1s0 , not the bond interface. If MII is supported, the command returns: If using a bonded interface for high availability, the module for each NIC must support MII. Setting the value to 0 (the default), turns this feature off. When configuring this setting, a good starting point for this parameter is 100 . Important It is essential that both arp_interval and arp_ip_target parameters are specified, or, alternatively, the miimon parameter is specified. Failure to do so can cause degradation of network performance in the event that a link fails. mode= value Allows you to specify the bonding policy. The value can be one of: balance-rr or 0 - Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded port interface beginning with the first one available. active-backup or 1 - Sets an active-backup policy for fault tolerance. Transmissions are received and sent out through the first available bonded port interface. Another bonded port interface is only used if the active bonded port interface fails. balance-xor or 2 - Transmissions are based on the selected hash policy. The default is to derive a hash by XOR of the source and destination MAC addresses multiplied by the modulo of the number of port interfaces. In this mode traffic destined for specific peers will always be sent over the same interface. As the destination is determined by the MAC addresses this method works best for traffic to peers on the same link or local network. If traffic has to pass through a single router then this mode of traffic balancing will be suboptimal. broadcast or 3 - Sets a broadcast policy for fault tolerance. All transmissions are sent on all port interfaces. 802.3ad or 4 - Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all ports in the active aggregator. Requires a switch that is 802.3ad compliant. balance-tlb or 5 - Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each port interface. Incoming traffic is received by the current port. 
If the receiving port fails, another port takes over the MAC address of the failed port. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. balance-alb or 6 - Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. For details about required settings on the upstream switch, see Section 7.6, "Overview of Bonding Modes and the Required Settings on the Switch" . primary= interface_name Specifies the interface name, such as enp1s0 , of the primary device. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode. See https://www.kernel.org/doc/Documentation/networking/bonding.txt for more information. primary_reselect= value Specifies the reselection policy for the primary port. This affects how the primary port is chosen to become the active port when failure of the active port or recovery of the primary port occurs. This parameter is designed to prevent flip-flopping between the primary port and other ports. Possible values are: always or 0 (default) - The primary port becomes the active port whenever it comes back up. better or 1 - The primary port becomes the active port when it comes back up, if the speed and duplex of the primary port is better than the speed and duplex of the current active port. failure or 2 - The primary port becomes the active port only if the current active port fails and the primary port is up. The primary_reselect setting is ignored in two cases: If no ports are active, the first port to recover is made the active port. When initially assigned to a bond, the primary port is always made the active port. Changing the primary_reselect policy through sysfs will cause an immediate selection of the best active port according to the new policy. This may or may not result in a change of the active port, depending upon the circumstances resend_igmp= range Specifies the number of IGMP membership reports to be issued after a failover event. One membership report is issued immediately after the failover, subsequent packets are sent in each 200ms interval. The valid range is 0 to 255 ; the default value is 1 . A value of 0 prevents the IGMP membership report from being issued in response to the failover event. This option is useful for bonding modes balance-rr (mode 0), active-backup (mode 1), balance-tlb (mode 5) and balance-alb (mode 6), in which a failover can switch the IGMP traffic from one port to another. Therefore a fresh IGMP report must be issued to cause the switch to forward the incoming IGMP traffic over the newly selected port. updelay= time_in_milliseconds Specifies (in milliseconds) how long to wait before enabling a link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. use_carrier= number Specifies whether or not miimon should use MII/ETHTOOL ioctls or netif_carrier_ok() to determine the link state. 
The netif_carrier_ok() function relies on the device driver to maintain its state with netif_carrier_on/off ; most device drivers support this function. The MII/ETHTOOL ioctls use a deprecated calling sequence within the kernel. However, this is still configurable in case your device driver does not support netif_carrier_on/off . Valid values are: 1 - Default setting. Enables the use of netif_carrier_ok() . 0 - Enables the use of MII/ETHTOOL ioctls. Note If the bonding interface insists that the link is up when it should not be, it is possible that your network device driver does not support netif_carrier_on/off . xmit_hash_policy= value Selects the transmit hash policy used for port selection in balance-xor and 802.3ad modes. Possible values are: 0 or layer2 - Default setting. This parameter uses the XOR of hardware MAC addresses to generate the hash. The formula used is: This algorithm will place all traffic to a particular network peer on the same port, and is 802.3ad compliant. 1 or layer3+4 - Uses upper layer protocol information (when available) to generate the hash. This allows traffic to a particular network peer to span multiple ports, although a single connection will not span multiple ports. The formula used for unfragmented TCP and UDP packets is: For fragmented TCP or UDP packets and all other IP protocol traffic, the source and destination port information is omitted. For non- IP traffic, the formula is the same as the layer2 transmit hash policy. This policy is intended to mimic the behavior of certain switches, particularly Cisco switches with PFC2, as well as some Foundry and IBM products. The algorithm used by this policy is not 802.3ad compliant. 2 or layer2+3 - Uses a combination of layer2 and layer3 protocol information to generate the hash, XORing the hardware MAC addresses and IP addresses. The formula is: This algorithm will place all traffic to a particular network peer on the same port. For non- IP traffic, the formula is the same as for the layer2 transmit hash policy. This policy is intended to provide a more balanced distribution of traffic than layer2 alone, especially in environments where a layer3 gateway device is required to reach most destinations. This algorithm is 802.3ad compliant. | [
"~]# ifup bond0",
"~]# ip link show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 52:54:00:e9:ce:d2 brd ff:ff:ff:ff:ff:ff 3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 52:54:00:38:a6:4c brd ff:ff:ff:ff:ff:ff 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 52:54:00:38:a6:4c brd ff:ff:ff:ff:ff:ff",
"~]USD cat /sys/class/net/bonding_masters bond0",
"~]# ifdown bond0",
"~]# echo 1000 > /sys/class/net/bond0/bonding/miimon",
"~]# echo 6 > /sys/class/net/bond0/bonding/mode",
"~]# echo balance-alb > /sys/class/net/bond0/bonding/mode",
"~]# ethtool interface_name | grep \"Link detected:\"",
"Link detected: yes",
"( source_MAC_address XOR destination_MAC ) MODULO slave_count",
"(( source_port XOR dest_port ) XOR (( source_IP XOR dest_IP ) AND 0xffff ) MODULO slave_count",
"((( source_IP XOR dest_IP ) AND 0xffff ) XOR ( source_MAC XOR destination_MAC )) MODULO slave_count"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding |
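To make the sysfs test cycle described above concrete, the following is a minimal shell sketch of one iteration: take the bond down, adjust two parameters, bring it back up, and confirm the kernel accepted them. It assumes a bond named bond0 and uses the miimon starting value and a mode change as illustrative choices; substitute whatever combination you are evaluating. Run the commands as root.

# Hypothetical test iteration for bond0
ifdown bond0                                            # the bond must be down before changing parameters
echo 100 > /sys/class/net/bond0/bonding/miimon          # example: MII monitoring every 100 ms
echo active-backup > /sys/class/net/bond0/bonding/mode  # example: mode 1, set by name
ifup bond0                                              # bring the bond back up with the new settings
cat /sys/class/net/bond0/bonding/miimon                 # verify the values the kernel is actually using
cat /sys/class/net/bond0/bonding/mode

Once a combination tests well, the same values go into the BONDING_OPTS= directive of the ifcfg-bond0 file as a space-separated list, for example BONDING_OPTS="mode=active-backup miimon=100", as described above.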
Chapter 14. Using Kerberos (GSSAPI) authentication | Chapter 14. Using Kerberos (GSSAPI) authentication AMQ Streams supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes. Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Centre (KDC). 14.1. Setting up AMQ Streams to use Kerberos (GSSAPI) authentication This procedure shows how to configure AMQ Streams so that Kafka clients can access Kafka and ZooKeeper using Kerberos (GSSAPI) authentication. The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host. The procedure shows, with examples, how to configure: Service principals Kafka brokers to use the Kerberos login ZooKeeper to use Kerberos login Producer and consumer clients to access Kafka using Kerberos authentication The instructions describe Kerberos set up for a single ZooKeeper and Kafka installation on a single host, with additional configuration for a producer and consumer client. Prerequisites To be able to configure Kafka and ZooKeeper to authenticate and authorize Kerberos credentials, you will need: Access to a Kerberos server A Kerberos client on each Kafka broker host For more information on the steps to set up a Kerberos server, and clients on broker hosts, see the example Kerberos on RHEL set up configuration . How you deploy Kerberos depends on your operating system. Red Hat recommends using Identity Management (IdM) when setting up Kerberos on Red Hat Enterprise Linux. Users of an Oracle or IBM JDK must install a Java Cryptography Extension (JCE). Oracle JCE IBM JCE Add service principals for authentication From your Kerberos server, create service principals (users) for ZooKeeper, Kafka brokers, and Kafka producer and consumer clients. Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM . Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC. For example: zookeeper/[email protected] kafka/[email protected] producer1/[email protected] consumer1/[email protected] The ZooKeeper service principal must have the same hostname as the zookeeper.connect configuration in the Kafka config/server.properties file: zookeeper.connect= node1.example.redhat.com :2181 If the hostname is not the same, localhost is used and authentication will fail. Create a directory on the host and add the keytab files: For example: /opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab Ensure the kafka user can access the directory: chown kafka:kafka -R /opt/kafka/krb5 Configure ZooKeeper to use a Kerberos Login Configure ZooKeeper to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for zookeeper . 
Create or modify the opt/kafka/config/jaas.conf file to support ZooKeeper client and server operations: Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" 4 principal="zookeeper/[email protected]"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; 1 Set to true to get the principal key from the keytab. 2 Set to true to store the principal key. 3 Set to true to obtain the Ticket Granting Ticket (TGT) from the ticket cache. 4 The keyTab property points to the location of the keytab file copied from the Kerberos KDC. The location and file must be readable by the kafka user. 5 The principal property is configured to match the fully-qualified principal name created on the KDC host, which follows the format SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-NAME . Edit opt/kafka/config/zookeeper.properties to use the updated JAAS configuration: # ... requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20 1 Controls the frequency for login renewal in milliseconds, which can be adjusted to suit ticket renewal intervals. Default is one hour. 2 Dictates whether the hostname is used as part of the login principal name. If using a single keytab for all nodes in the cluster, this is set to true . However, it is recommended to generate a separate keytab and fully-qualified principal for each broker host for troubleshooting. 3 Controls whether the realm name is stripped from the principal name for Kerberos negotiations. It is recommended that this setting is set as false . 4 Enables SASL authentication mechanisms for the ZooKeeper server and client. 5 The RequireSasl properties controls whether SASL authentication is required for quorum events, such as master elections. 6 The loginContext properties identify the name of the login context in the JAAS configuration used for authentication configuration of the specified component. The loginContext names correspond to the names of the relevant sections in the opt/kafka/config/jaas.conf file. 7 Controls the naming convention to be used to form the principal name used for identification. The placeholder _HOST is automatically resolved to the hostnames defined by the server.1 properties at runtime. 
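Before starting ZooKeeper, it can save debugging time to confirm that the keytab referenced in jaas.conf actually contains the expected principal and that a ticket can be obtained from it. The following is a minimal sketch using the standard MIT Kerberos client tools (klist, kinit, kdestroy); the keytab path and principal are the ones used in the example above, so adjust them to your environment.

# List the principals stored in the ZooKeeper keytab
klist -kt /opt/kafka/krb5/zookeeper-node1.keytab

# Attempt a non-interactive login with the keytab, show the ticket, then discard it
kinit -kt /opt/kafka/krb5/zookeeper-node1.keytab zookeeper/[email protected]
klist
kdestroy

If kinit fails here, the JAAS login inside ZooKeeper will fail for the same reason, so fix the keytab or principal before proceeding.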
Start ZooKeeper with JVM parameters to specify the Kerberos login configuration: su - kafka export EXTRA_ARGS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties If you are not using the default service name ( zookeeper ), add the name using the -Dzookeeper.sasl.client.username= NAME parameter. Note If you are using the /etc/krb5.conf location, you do not need to specify -Djava.security.krb5.conf=/etc/krb5.conf when starting ZooKeeper, Kafka, or the Kafka producer and consumer. Configure the Kafka broker server to use a Kerberos login Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka . Modify the opt/kafka/config/jaas.conf file with the following elements: KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so the listeners use the SASL/GSSAPI login. Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols. For example: # ... broker.id=0 # ... listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION # ... listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 # .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5 ... 1 Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications. 2 For TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. For non-TLS-enabled connectors, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties. 3 SASL mechanism for Kerberos authentication is GSSAPI . 4 Kerberos authentication for inter-broker communication. 5 The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration. Start the Kafka broker, with JVM parameters to specify the Kerberos login configuration: su - kafka export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties If the broker and ZooKeeper cluster were previously configured and working with a non-Kerberos-based authentication system, it is possible to start the ZooKeeper and broker cluster and check for configuration errors in the logs. After starting the broker and Zookeeper instances, the cluster is now configured for Kerberos authentication. Configure Kafka producer and consumer clients to use Kerberos authentication Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1 . 
Add the Kerberos configuration to the producer or consumer configuration file. For example: /opt/kafka/config/producer.properties # ... sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ 4 useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/producer1.keytab" \ principal="producer1/[email protected]"; # ... 1 Configuration for Kerberos (GSSAPI) authentication. 2 Kerberos uses the SASL plaintext (username/password) security protocol. 3 The service principal (user) for Kafka that was configured in the Kerberos KDC. 4 Configuration for the JAAS using the same properties defined in jaas.conf . /opt/kafka/config/consumer.properties # ... sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/consumer1.keytab" \ principal="consumer1/[email protected]"; # ... Run the clients to verify that you can send and receive messages from the Kafka brokers. Producer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Consumer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Additional resources Kerberos man pages: krb5.conf(5), kinit(1), klist(1), and kdestroy(1) Example Kerberos server on RHEL set up configuration Example client application to authenticate with a Kafka cluster using Kerberos tickets | [
"zookeeper.connect= node1.example.redhat.com :2181",
"/opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab",
"chown kafka:kafka -R /opt/kafka/krb5",
"Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" 4 principal=\"zookeeper/[email protected]\"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; };",
"requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20",
"su - kafka export EXTRA_ARGS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; };",
"broker.id=0 listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5",
"su - kafka export KAFKA_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \\ 4 useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/producer1.keytab\" principal=\"producer1/[email protected]\";",
"sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/consumer1.keytab\" principal=\"consumer1/[email protected]\";",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/assembly-kerberos_str |
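The procedure above assumes the service principals and keytabs already exist on the KDC. As a supplement to the "Add service principals for authentication" step, the following is a hedged sketch of how they might be created on an MIT Kerberos KDC using kadmin.local; the realm, hostnames, and keytab file names match the examples, while the /tmp export paths are placeholders. If you use Identity Management (IdM) or another KDC, the tooling differs.

# Run on the KDC host as root (MIT Kerberos); principals are created with random keys
kadmin.local -q "addprinc -randkey zookeeper/[email protected]"
kadmin.local -q "addprinc -randkey kafka/[email protected]"
kadmin.local -q "addprinc -randkey producer1/[email protected]"
kadmin.local -q "addprinc -randkey consumer1/[email protected]"

# Export one keytab per principal, then copy the files to /opt/kafka/krb5/ on the broker host
kadmin.local -q "ktadd -k /tmp/zookeeper-node1.keytab zookeeper/[email protected]"
kadmin.local -q "ktadd -k /tmp/kafka-node1.keytab kafka/[email protected]"
kadmin.local -q "ktadd -k /tmp/kafka-producer1.keytab producer1/[email protected]"
kadmin.local -q "ktadd -k /tmp/kafka-consumer1.keytab consumer1/[email protected]"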
Chapter 2. Check your Connectivity Link installation and permissions | Chapter 2. Check your Connectivity Link installation and permissions This guide expects that you have successfully installed Connectivity Link on at least one OpenShift cluster, and that you have the correct user permissions. Prerequisites You completed the Connectivity Link installation steps on one or more clusters, as described in Installing Connectivity Link on OpenShift . You have the kubectl or oc command installed. You have write access to the OpenShift namespaces used in this guide. You have an AWS account with Amazon Route 53 and a DNS zone for the examples in this guide. Google Cloud DNS and Microsoft Azure DNS are also supported. Optional: For rate limiting in a multicluster environment, you have installed Connectivity Link on more than one cluster and have a shared accessible Redis-based datastore. For more details, see Installing Connectivity Link on OpenShift . For Observability, OpenShift user workload monitoring is configured to remote write to a central storage system such as Thanos, as described in Connectivity Link Observability Guide . | null | https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/configuring_and_deploying_gateway_policies_with_connectivity_link/rhcl-deploy-prerequisites_rhcl |
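As a quick way to confirm the installation and permission prerequisites listed above, the following is a minimal sketch using the oc client. The namespace names are placeholders, not values defined by this guide: substitute the namespace where the Connectivity Link operator runs and the project namespace you intend to use for the examples.

# Confirm you are logged in and which user the checks run as
oc whoami

# Check write access to the namespace you plan to use in this guide
oc auth can-i create deployments -n <your-namespace>
oc auth can-i create secrets -n <your-namespace>

# Confirm the Connectivity Link operator pods are running in your operator namespace
oc get pods -n <operator-namespace>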
2.6. Business Metadata | 2.6. Business Metadata Business metadata represents additional information about a piece of data, not necessarily related to its physical storage in the enterprise information system or data access requirements. It can also represent descriptions, business rules, and other additional information about a piece of data. Continuing with our example of the ZIP Code column in the address book database, the following represents business metadata we may know about the ZIP code: The first five characters represent the five ZIP code numbers, the final four represent the ZIP Plus Four digits if available, or 0000 if not The application used to populate this field in the database strictly enforces the integrity of the data format Although the first might seem technical, it does not directly relate to the physical storage of the data. It represents a business rule applied to the contents of the column, not the contents themselves. The second, of course, represents some business information about the way the column was populated. This information, although useful to associate with our definition of the column, does not reflect the physical storage of the data. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/business_metadata |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_infiniband_and_rdma_networks/proc_providing-feedback-on-red-hat-documentation_configuring-infiniband-and-rdma-networks |
Chapter 164. InfluxDB Component | Chapter 164. InfluxDB Component Available as of Camel version 2.18 This component allows you to interact with InfluxDB https://influxdata.com/time-series-platform/influxdb/ , a time series database. The native body type for this component is Point (the native influxdb class), but it can also accept Map<String, Object> as the message body, which is converted to Point.class. Note that the map must contain an element with InfluxDbConstants.MEASUREMENT_NAME as the key. Additionally, you may register your own converters from your data type to Point, or use the (un)marshalling tools provided by Camel. From Camel 2.18 onwards, the InfluxDB component requires Java 8. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-influxdb</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 164.1. URI format influxdb://beanName?[options] 164.2. URI Options The producer allows sending messages to an InfluxDB instance configured in the registry, using the native Java driver. The InfluxDB component has no options. The InfluxDB endpoint is configured using URI syntax: with the following path and query parameters: 164.2.1. Path Parameters (1 parameter): Name Description Default Type connectionBean Required Connection to the influx database, of class InfluxDB.class String 164.2.2. Query Parameters (6 parameters): Name Description Default Type batch (producer) Define if this operation is a batch operation or not false boolean databaseName (producer) The name of the database where the time series will be stored String operation (producer) Define if this operation is an insert or a query insert String query (producer) Define the query in case of operation query String retentionPolicy (producer) The string that defines the retention policy to the data created by the endpoint default String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 164.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.influxdb.enabled Enable influxdb component true Boolean camel.component.influxdb.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 164.4. Message Headers Name Default Value Type Context Description 164.5. Example Below is an example route that stores a point into the database, taking the database name from a message header, followed by a variant that takes the database name from the URI: from("direct:start") .setHeader(InfluxDbConstants.DBNAME_HEADER, constant("myTimeSeriesDB")) .to("influxdb://connectionBean"); from("direct:start") .to("influxdb://connectionBean?databaseName=myTimeSeriesDB"); For more information, see these resources... 164.6. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-influxdb</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"influxdb://beanName?[options]",
"influxdb:connectionBean",
"from(\"direct:start\") .setHeader(InfluxDbConstants.DBNAME_HEADER, constant(\"myTimeSeriesDB\")) .to(\"influxdb://connectionBean);",
"from(\"direct:start\") .to(\"influxdb://connectionBean?databaseName=myTimeSeriesDB\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/influxdb-component |
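Before wiring a route to the connectionBean, it can help to confirm that the InfluxDB server itself is reachable and that the target database exists. The following is a hedged sketch against the InfluxDB 1.x HTTP API; the localhost:8086 address is an assumption, and the database name is the one used in the example route. If authentication is enabled on your server, add the appropriate credentials.

# Health check: an HTTP 204 response from /ping means the server is up (InfluxDB 1.x)
curl -i http://localhost:8086/ping

# Create the database used in the example route if it does not exist yet
curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE myTimeSeriesDB"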
Chapter 29. Insert Field Action | Chapter 29. Insert Field Action Adds a custom field with a constant value to the message in transit 29.1. Configuration Options The following table summarizes the configuration options available for the insert-field-action Kamelet: Property Name Description Type Default Example field * Field The name of the field to be added string value * Value The value of the field string Note Fields marked with an asterisk (*) are mandatory. 29.2. Dependencies At runtime, the insert-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 29.3. Usage This section describes how you can use the insert-field-action . 29.3.1. Knative Action You can use the insert-field-action Kamelet as an intermediate step in a Knative binding. insert-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"foo":"John"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: "The Field" value: "The Value" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 29.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 29.3.1.2. Procedure for using the cluster CLI Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f insert-field-action-binding.yaml 29.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name insert-field-action-binding timer-source?message='{"foo":"John"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 29.3.2. Kafka Action You can use the insert-field-action Kamelet as an intermediate step in a Kafka binding. insert-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"foo":"John"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: "The Field" value: "The Value" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 29.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 29.3.2.2. Procedure for using the cluster CLI Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f insert-field-action-binding.yaml 29.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name insert-field-action-binding timer-source?message='{"foo":"John"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 29.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/insert-field-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"foo\":\"John\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: \"The Field\" value: \"The Value\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f insert-field-action-binding.yaml",
"kamel bind --name insert-field-action-binding timer-source?message='{\"foo\":\"John\"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: insert-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"foo\":\"John\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-field-action properties: field: \"The Field\" value: \"The Value\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f insert-field-action-binding.yaml",
"kamel bind --name insert-field-action-binding timer-source?message='{\"foo\":\"John\"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/insert-field-action |
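After applying the binding, a quick way to confirm that the action is wired up is to check the binding and the integration it produces, then follow the integration log to see the timer events flow through the insert-field-action step. This is a minimal sketch, assuming the Camel K operator creates an integration with the same name as the binding in the current namespace.

# Check that the binding was created and look at its status
oc get kameletbinding insert-field-action-binding

# The operator builds a Camel K integration with the same name; follow its log
oc get integration insert-field-action-binding
kamel logs insert-field-action-binding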
Chapter 7. Quotas | Chapter 7. Quotas 7.1. Resource quotas per project A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project. This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them. 7.1.1. Resources managed by quotas The following describes the set of compute resources and object types that can be managed by a quota. Note A pod is in a terminal state if status.phase in (Failed, Succeeded) is true. Table 7.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. Table 7.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. Table 7.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of ReplicationControllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. 
services.loadbalancers The total number of services of type LoadBalancer that can exist in the project. services.nodeports The total number of services of type NodePort that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of imagestreams that can exist in the project. 7.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu 7.1.3. Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system. 7.1.4. Requests versus limits When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 7.1.5. Sample resource quota definitions core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 services.loadbalancers: "2" 6 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. 6 The total number of services of type LoadBalancer that can exist in the project. 
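As a quick illustration of how enforcement behaves with the example above, the following is a minimal sketch that applies the core-object-counts quota to a project and then inspects the tracked usage; the demoproject namespace is the one used elsewhere in this chapter, so substitute your own project name as needed.

# Apply the quota shown above and inspect its hard limits and current usage
oc create -f core-object-counts.yaml -n demoproject
oc describe quota core-object-counts -n demoproject

# Once a hard limit is reached, further creations of that object type are rejected by the
# API server with a "forbidden ... exceeded quota" error, as shown in the GPU example below.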
openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 limits.cpu: "2" 4 limits.memory: 2Gi 5 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 5 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 scopes: - NotTerminating 4 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods fall under NotTerminating unless the RestartNever policy is applied. compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 scopes: - Terminating 4 1 The total number of pods in a terminating state. 2 Across all pods in a terminating state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a terminating state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota charges for build or deployer pods, but not long running pods like a web server or database. storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 
5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 8 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 9 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. 7.1.6. Creating a quota You can create a quota to constrain resource usage in a given project. Procedure Define the quota in a file. Use the file to create the quota and apply it to a project: USD oc create -f <file> [-n <project_name>] For example: USD oc create -f core-object-counts.yaml -n demoproject 7.1.6.1. Creating object count quotas You can create an object count quota for all standard namespaced resource types on Red Hat OpenShift Service on AWS, such as BuildConfig and DeploymentConfig objects. An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. The quota can only be created if there are enough spare resources within the project. Procedure To configure an object count quota for a resource: Run the following command: USD oc create quota <name> \ --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1 1 The <resource> variable is the name of the resource, and <group> is the API group, if applicable. Use the oc api-resources command for a list of resources and their associated API groups. For example: USD oc create quota test \ --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 Example output resourcequota "test" created This example limits the listed resources to the hard limit in each project in the cluster. Verify that the quota was created: USD oc describe quota test Example output Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 7.1.6.2. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. is allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure Determine how many GPUs are available on a node in your cluster. For example: # oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0 In this example, 2 GPUs are available. Create a ResourceQuota object to set a quota in the namespace nvidia . 
In this example, the quota is 1 : Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota: # oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml : apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Create the pod: # oc create -f gpu-pod.yaml Verify that the pod is running: # oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: # oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 7.1.7. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details. Procedure Get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject Example output NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10 Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Example output Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 7.1.8. Configuring explicit resource quotas Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Add a resource quota definition to a project request template: If a project request template does not exist in a cluster: Create a bootstrap project template and output it to a file called template.yaml : USD oc adm create-bootstrap-project-template -o yaml > template.yaml Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. 
7.1.8. Configuring explicit resource quotas
Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Add a resource quota definition to a project request template: If a project request template does not exist in a cluster: Create a bootstrap project template and output it to a file called template.yaml : USD oc adm create-bootstrap-project-template -o yaml > template.yaml Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. The definition must be added before the parameters: section in the template: - apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project. 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the total number of claims in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot create claims. Create a project request template from the modified template.yaml file in the openshift-config namespace: USD oc create -f template.yaml -n openshift-config Note To include the configuration as a kubectl.kubernetes.io/last-applied-configuration annotation, add the --save-config option to the oc create command. By default, the template is called project-request . If a project request template already exists within a cluster: Note If you declaratively or imperatively manage objects within your cluster by using configuration files, edit the existing project request template through those files instead. List templates in the openshift-config namespace: USD oc get templates -n openshift-config Edit an existing project request template: USD oc edit template <project_request_template> -n openshift-config Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template. If you created a project request template, reference it in the cluster's project configuration resource: Access the project configuration resource for editing: By using the web console: Navigate to the Administration → Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . By using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name project-request : apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: project-request
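To confirm that the cluster has picked up the template reference, you can read it back from the project configuration resource. A minimal check, assuming cluster-admin access and the default project-request template name used in the example above: USD oc get project.config.openshift.io/cluster -o jsonpath='{.spec.projectRequestTemplate.name}' The command prints the referenced template name, project-request in this example, once the change is saved.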
Verify that the resource quota is applied when projects are created: Create a project: USD oc new-project <project_name> List the project's resource quotas: USD oc get resourcequotas Describe the resource quota in detail: USD oc describe resourcequotas <resource_quota_name>
7.2. Resource quotas across multiple projects
A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated and that aggregate is used to limit resources across all the selected projects. This guide describes how cluster administrators can set and manage resource quotas across multiple projects. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
7.2.1. Selecting multiple projects during quota creation
When creating quotas, you can select multiple projects based on annotation selection, label selection, or both. Procedure To select projects based on annotations, run the following command: USD oc create clusterquota for-user \ --project-annotation-selector openshift.io/requester=<user_name> \ --hard pods=10 \ --hard secrets=20 This creates the following ClusterResourceQuota object: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: "10" secrets: "20" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" total: 5 hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" 1 The ResourceQuotaSpec object that will be enforced over the selected projects. 2 A simple key-value selector for annotations. 3 A label selector that can be used to select projects. 4 A per-namespace map that describes current quota usage in each selected project. 5 The aggregate usage across all selected projects. This multi-project quota controls all projects requested by <user_name> through the default project request endpoint, limiting them to a combined total of 10 pods and 20 secrets. Similarly, to select projects based on labels, run this command: USD oc create clusterresourcequota for-name \ 1 --project-label-selector=name=frontend \ 2 --hard=pods=10 --hard=secrets=20 1 Both clusterresourcequota and clusterquota are aliases of the same command. for-name is the name of the ClusterResourceQuota object. 2 To select projects by label, provide a key-value pair by using the format --project-label-selector=key=value . This creates the following ClusterResourceQuota object definition: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: "10" secrets: "20" selector: annotations: null labels: matchLabels: name: frontend
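For the label-based selector to match anything, the selected projects must actually carry the name=frontend label. A minimal sketch, assuming a project named frontend-prod already exists and that you have permission to label namespaces (the project name is only an illustration): USD oc label namespace frontend-prod name=frontend After the label is applied, the project appears under status.namespaces in the for-name ClusterResourceQuota object and its resource usage counts against the shared limits.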
7.2.2. Viewing applicable cluster resource quotas
A project administrator cannot create or modify the multi-project quota that limits their project, but they can view the multi-project quota documents that are applied to their project. The project administrator can do this through the AppliedClusterResourceQuota resource. Procedure To view quotas applied to a project, run: USD oc describe AppliedClusterResourceQuota Example output Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20
7.2.3. Selection granularity
Because of the locking required when claiming quota allocations, the number of active projects selected by a multi-project quota is an important consideration. Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects. | [
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/building_applications/quotas |