Dataset fields: title, content, commands, url
Validation and troubleshooting
Validation and troubleshooting OpenShift Container Platform 4.16 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/validation_and_troubleshooting/index
Chapter 5. Advisories related to this release
Chapter 5. Advisories related to this release The following advisories have been issued for bug fixes and CVE fixes included in this release: RHSA-2022:0161 RHSA-2022:0165 RHSA-2022:0166 Revised on 2024-05-03 15:36:17 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.2/openjdk-1702-advisory
Managing hosts
Managing hosts Red Hat Satellite 6.15 Register hosts to Satellite, configure host groups and collections, set up remote execution, manage packages on hosts, monitor hosts, and more Red Hat Satellite Documentation Team [email protected]
[ "hammer host create --ask-root-password yes --hostgroup \" My_Host_Group \" --interface=\"primary=true, provision=true, mac= My_MAC_Address , ip= My_IP_Address \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \"", "subscription-manager syspurpose set usage ' Production ' subscription-manager syspurpose set role ' Red Hat Enterprise Linux Server ' subscription-manager syspurpose add addons ' your_addon '", "subscription-manager syspurpose", "subscription-manager attach --auto", "subscription-manager status", "hammer host delete --id My_Host_ID --location-id My_Location_ID --organization-id My_Organization_ID", "mkdir /etc/puppetlabs/code/environments/ example_environment", "hammer hostgroup create --name \"Base\" --architecture \"My_Architecture\" --content-source-id _My_Content_Source_ID_ --content-view \"_My_Content_View_\" --domain \"_My_Domain_\" --lifecycle-environment \"_My_Lifecycle_Environment_\" --locations \"_My_Location_\" --medium-id _My_Installation_Medium_ID_ --operatingsystem \"_My_Operating_System_\" --organizations \"_My_Organization_\" --partition-table \"_My_Partition_Table_\" --puppet-ca-proxy-id _My_Puppet_CA_Proxy_ID_ --puppet-environment \"_My_Puppet_Environment_\" --puppet-proxy-id _My_Puppet_Proxy_ID_ --root-pass \"My_Password\" --subnet \"_My_Subnet_\"", "MAJOR=\" My_Major_OS_Version \" ARCH=\" My_Architecture \" ORG=\" My_Organization \" LOCATIONS=\" My_Location \" PTABLE_NAME=\" My_Partition_Table \" DOMAIN=\" My_Domain \" hammer --output csv --no-headers lifecycle-environment list --organization \"USD{ORG}\" | cut -d ',' -f 2 | while read LC_ENV; do [[ USD{LC_ENV} == \"Library\" ]] && continue hammer hostgroup create --name \"rhel-USD{MAJOR}server-USD{ARCH}-USD{LC_ENV}\" --architecture \"USD{ARCH}\" --partition-table \"USD{PTABLE_NAME}\" --domain \"USD{DOMAIN}\" --organizations \"USD{ORG}\" --query-organization \"USD{ORG}\" --locations \"USD{LOCATIONS}\" --lifecycle-environment \"USD{LC_ENV}\" done", "systemctl enable --now chronyd", "chkconfig --add ntpd chkconfig ntpd on service ntpd start", "cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors", "update-ca-trust", "mkdir /etc/puppetlabs/code/environments/ example_environment", "curl -O http:// satellite.example.com /pub/bootstrap.py", "chmod +x bootstrap.py", "/usr/libexec/platform-python bootstrap.py -h", "./bootstrap.py -h", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"", "./bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"", "rm bootstrap.py", "ROLE='Bootstrap' hammer role create --name \"USDROLE\" hammer filter create --role \"USDROLE\" --permissions view_organizations hammer filter create --role \"USDROLE\" --permissions view_locations hammer filter create --role \"USDROLE\" --permissions view_domains hammer filter create --role \"USDROLE\" --permissions view_hostgroups hammer filter create --role \"USDROLE\" --permissions view_hosts hammer filter create --role \"USDROLE\" --permissions view_architectures hammer filter create --role \"USDROLE\" --permissions view_ptables hammer filter create --role \"USDROLE\" --permissions view_operatingsystems hammer filter create --role \"USDROLE\" --permissions create_hosts", "hammer user add-role 
--id user_id --role Bootstrap", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet", "/usr/libexec/platform-python bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman", "bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group 
\" --activationkey=\" My_Activation_Key \" --add-domain", "hammer settings set --name create_new_host_when_facts_are_uploaded --value false hammer settings set --name create_new_host_when_report_is_uploaded --value false", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com", "yum install katello-host-tools-tracer", "katello-tracer-upload", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "hammer host create --ask-root-password yes --hostgroup My_Host_Group --ip= My_IP_Address --mac= My_MAC_Address --managed true --interface=\"identifier= My_NIC_1, mac=_My_MAC_Address_1 , managed=true, type=Nic::Managed, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --interface=\"identifier= My_NIC_2 , mac= My_MAC_Address_2 , managed=true, type=Nic::Managed, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --interface=\"identifier= bond0 , ip= My_IP_Address_2 , type=Nic::Bond, mode=active-backup, attached_devices=[ My_NIC_1 , My_NIC_2 ], managed=true, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \" --subnet-id= My_Subnet_ID", "satellite-maintain packages install ipmitool", "satellite-installer --foreman-proxy-bmc-default-provider=ipmitool --foreman-proxy-bmc=true", "satellite-installer --enable-foreman-plugin-leapp", "satellite-installer --enable-foreman-plugin-remote-execution-cockpit --reset-foreman-plugin-remote-execution-cockpit-ensure", "satellite-installer --foreman-plugin-remote-execution-cockpit-ensure absent", "satellite-installer --foreman-proxy-plugin-remote-execution-script-cockpit-integration false", "hammer report-template list", "hammer report-template generate --id My_Template_ID", "hammer report-template generate --inputs \"Days from Now=no limit\" --name \"Subscription - Entitlement Report\"", "hammer report-template generate --inputs \"Days from Now=60\" --name \"Subscription - Entitlement Report\"", "hammer report-template list", "hammer report-template dump --id My_Template_ID > example_export .erb", "curl --insecure --user admin:redhat --request GET --config https:// satellite.example.com /api/report_templates | json_reformat", "{ \"total\": 6, \"subtotal\": 6, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applicable errata\", \"id\": 112 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applied Errata\", \"id\": 113 }, { \"created_at\": \"2019-11-30 16:15:24 UTC\", \"updated_at\": \"2019-11-30 16:15:24 UTC\", \"name\": \"Hosts - complete list\", \"id\": 158 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", 
\"name\": \"Host statuses\", \"id\": 114 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Registered hosts\", \"id\": 115 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Subscriptions\", \"id\": 116 } ] }", "curl --insecure --output /tmp/_Example_Export_Template .erb_ --user admin:password --request GET --config https:// satellite.example.com /api/report_templates/ My_Template_ID /export", "cat Example_Template .json { \"name\": \" Example Template Name \", \"template\": \" Enter ERB Code Here \" }", "{ \"name\": \"Hosts - complete list\", \"template\": \" <%# name: Hosts - complete list snippet: false template_inputs: - name: host required: false input_type: user advanced: false value_type: plain resource_type: Katello::ActivationKey model: ReportTemplate -%> <% load_hosts(search: input('host')).each_record do |host| -%> <% report_row( 'Server FQDN': host.name ) -%> <% end -%> <%= report_render %> \" }", "curl --insecure --user admin:redhat --data @ Example_Template .json --header \"Content-Type:application/json\" --request POST --config https:// satellite.example.com /api/report_templates/import", "curl --insecure --user admin:redhat --request GET --config https:// satellite.example.com /api/report_templates | json_reformat", "<%# name: Entitlements snippet: false model: ReportTemplate require: - plugin: katello version: 3.14.0 -%>", "<%- load_hosts(includes: [:lifecycle_environment, :operatingsystem, :architecture, :content_view, :organization, :reported_data, :subscription_facet, :pools => [:subscription]]).each_record do |host| -%>", "<%- host.pools.each do |pool| -%>", "<%- report_row( 'Name': host.name, 'Organization': host.organization, 'Lifecycle Environment': host.lifecycle_environment, 'Content View': host.content_view, 'Host Collections': host.host_collections, 'Virtual': host.virtual, 'Guest of Host': host.hypervisor_host, 'OS': host.operatingsystem, 'Arch': host.architecture, 'Sockets': host.sockets, 'RAM': host.ram, 'Cores': host.cores, 'SLA': host_sla(host), 'Products': host_products(host), 'Subscription Name': sub_name(pool), 'Subscription Type': pool.type, 'Subscription Quantity': pool.quantity, 'Subscription SKU': sub_sku(pool), 'Subscription Contract': pool.contract_number, 'Subscription Account': pool.account_number, 'Subscription Start': pool.start_date, 'Subscription End': pool.end_date, 'Subscription Guest': registered_through(host) ) -%>", "<%- end -%> <%- end -%>", "<%= report_render -%>", "hammer host-collection create --name \" My_Host_Collection \" --organization \" My_Organization \"", "hammer host-collection add-host --host-ids My_Host_ID_1 --id My_Host_Collection_ID", "hammer host-collection add-host --host-ids My_Host_ID_1 , My_Host_ID_2 --id My_Host_Collection_ID", "subscription-manager refresh", "name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh", "dnf install katello-pull-transport-migrate", "yum install katello-pull-transport-migrate", "systemctl status yggdrasild", "hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH", "curl -X GET -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= 
My_capsule_ID", "curl -X PUT -H 'Content-Type: application/json' -d '{ \"playbook_names\": [\" My_Playbook_Name \"] }' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID", "curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID", "hammer settings set --name=remote_execution_fallback_proxy --value=true", "hammer settings set --name=remote_execution_global_proxy --value=true", "mkdir /My_Remote_Working_Directory", "chcon --reference=/var/tmp /My_Remote_Working_Directory", "satellite-installer --foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory", "mkdir /My_Remote_Working_Directory", "systemctl edit yggdrasild", "Environment=FOREMAN_YGG_WORKER_WORKDIR= /My_Remote_Working_Directory", "systemctl restart yggdrasild", "ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]", "ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]", "ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy", "mkdir ~/.ssh", "curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys", "chmod 700 ~/.ssh", "chmod 600 ~/.ssh/authorized_keys", "<%= snippet 'remote_execution_ssh_keys' %>", "id -u foreman-proxy", "umask 077", "mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"", "cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab", "chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"", "chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"", "restorecon -RvF /var/kerberos/krb5", "satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true", "hostgroup_fullname ~ \" My_Host_Group *\"", "hammer settings set --name=remote_execution_global_proxy --value=false", "hammer job-template list", "hammer job-template info --id My_Template_ID", "hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... 
--job-template \" My_Template_Name \" --search-query \" My_Search_Query \"", "hammer job-invocation list", "hammer job-invocation output --host My_Host_Name --id My_Job_ID", "hammer job-invocation cancel --id My_Job_ID", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200", "global_status = ok", "global_status = error or global_status = warning", "status.pending > 0", "status.restarted > 0", "status.interesting = true", "satellite-installer --enable-foreman-plugin-templates", "mkdir --parents /usr/share/foreman/.config/git", "touch /usr/share/foreman/.config/git/config", "chown --recursive foreman /usr/share/foreman/.config", "sudo --user foreman git config --global http.sslCAPath Path_To_CA_Certificate", "sudo --user foreman ssh-keygen", "sudo --user foreman ssh git.example.com", "<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>", "mkdir /var/lib/foreman/ My_Templates_Dir", "chown foreman /var/lib/foreman/ My_Templates_Dir", "<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>", "hammer import-templates --branch \" My_Branch \" --filter '.* Template NameUSD ' --organization \" My_Organization \" --prefix \"[ Custom Index ] \" --repo \" https://git.example.com/path/to/repository \"", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login : password -k https:// satellite.example.com /api/v2/templates/import -X POST", "hammer export-templates --organization \" My_Organization \" --repo \" https://git.example.com/path/to/repository \"", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login : password -k https:// satellite.example.com /api/v2/templates/export -X POST", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login:password -k https:// satellite.example.com /api/v2/templates/export -X POST -d \"{\\\"repo\\\":\\\"git.example.com/templates\\\"}\"", "curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool", "{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_install\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }", "curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool", "{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_update\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : 
\"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }", "curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool", "{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_remove\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }", "rpm --query yggdrasil", "systemctl status yggdrasil com.redhat.Yggdrasil1.Worker1.foreman", "dnf install foreman_ygg_migration", "systemctl status yggdrasil com.redhat.Yggdrasil1.Worker1.foreman", "<% if @host.operatingsystem.family == \"Redhat\" && @host.operatingsystem.major.to_i > 6 -%> systemctl <%= input(\"action\") %> <%= input(\"service\") %> <% else -%> service <%= input(\"service\") %> <%= input(\"action\") %> <% end -%>", "echo <%= @host.name %>", "host.example.com", "<% server_name = @host.fqdn %> <%= server_name %>", "host.example.com", "<%= @ example_incorrect_variable .fqdn -%>", "undefined method `fqdn' for nil:NilClass", "<%= \"line1\" %> <%= \"line2\" %>", "line1 line2", "<%= \"line1\" -%> <%= \"line2\" %>", "line1line2", "<%= @host.fqdn -%> <%= @host.ip -%>", "host.example.com10.10.181.216", "<%# A comment %>", "<%- load_hosts.each do |host| -%> <%- if host.build? %> <%= host.name %> build is in progress <%- end %> <%- end %>", "<%= input('cpus') %>", "<%- load_hosts().each_record do |host| -%> <%= host.name %>", "<% load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%= host.name %> <% end -%>", "<%- load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name ) -%> <%- end -%> <%= report_render -%>", "Server FQDN host1.example.com host2.example.com host3.example.com host4.example.com host5.example.com host6.example.com", "<%- load_hosts(search: input('host')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name, 'IP': host.ip ) -%> <%- end -%> <%= report_render -%>", "Server FQDN,IP host1.example.com , 10.8.30.228 host2.example.com , 10.8.30.227 host3.example.com , 10.8.30.226 host4.example.com , 10.8.30.225 host5.example.com , 10.8.30.224 host6.example.com , 10.8.30.223", "<%= report_render -%>", "truthy?(\"true\") => true truthy?(1) => true truthy?(\"false\") => false truthy?(0) => false", "falsy?(\"true\") => false falsy?(1) => false falsy?(\"false\") => true falsy?(0) => true", "<% @host.ip.split('.').last %>", "<% load_hosts().each_record do |host| -%> <% if @host.name == \" host1.example.com \" -%> <% result=\"positive\" -%> <% else -%> <% result=\"negative\" -%> <% end -%> <%= result -%>", "host1.example.com positive", "<%= @host.interfaces -%>", "<Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>", "[] each find_in_batches first map size to_a", "alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid", "<% load_hosts().each_record do |host| -%> <% host.interfaces.each do |iface| -%> iface.alias?: <%= iface.alias? 
%> iface.attached_to: <%= iface.attached_to %> iface.bond_options: <%= iface.bond_options %> iface.children_mac_addresses: <%= iface.children_mac_addresses %> iface.domain: <%= iface.domain %> iface.fqdn: <%= iface.fqdn %> iface.identifier: <%= iface.identifier %> iface.inheriting_mac: <%= iface.inheriting_mac %> iface.ip: <%= iface.ip %> iface.ip6: <%= iface.ip6 %> iface.link: <%= iface.link %> iface.mac: <%= iface.mac %> iface.managed?: <%= iface.managed? %> iface.mode: <%= iface.mode %> iface.mtu: <%= iface.mtu %> iface.physical?: <%= iface.physical? %> iface.primary: <%= iface.primary %> iface.provision: <%= iface.provision %> iface.shortname: <%= iface.shortname %> iface.subnet: <%= iface.subnet %> iface.subnet6: <%= iface.subnet6 %> iface.tag: <%= iface.tag %> iface.virtual?: <%= iface.virtual? %> iface.vlanid: <%= iface.vlanid %> <%- end -%>", "host1.example.com iface.alias?: false iface.attached_to: iface.bond_options: iface.children_mac_addresses: [] iface.domain: iface.fqdn: host1.example.com iface.identifier: ens192 iface.inheriting_mac: 00:50:56:8d:4c:cf iface.ip: 10.10.181.13 iface.ip6: iface.link: true iface.mac: 00:50:56:8d:4c:cf iface.managed?: true iface.mode: balance-rr iface.mtu: iface.physical?: true iface.primary: true iface.provision: true iface.shortname: host1.example.com iface.subnet: iface.subnet6: iface.tag: iface.virtual?: false iface.vlanid:", "<% pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = pm_set || host_param_true?('force-puppet') puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo') %>", "<% os_major = @host.operatingsystem.major.to_i os_minor = @host.operatingsystem.minor.to_i %> <% if ((os_minor < 2) && (os_major < 14)) -%> <% end -%>", "<%= indent 4 do snippet 'subscription_manager_registration' end %>", "<% subnet = @host.subnet %> <% if subnet.respond_to?(:dhcp_boot_mode?) -%> <%= snippet 'kickstart_networking_setup' %> <% end -%>", "'Serial': host.facts['dmi::system::serial_number'], 'Encrypted': host.facts['luks_stat'],", "<%- report_row( 'Host': host.name, 'Operating System': host.operatingsystem, 'Kernel': host.facts['uname::release'], 'Environment': host.single_lifecycle_environment ? host.single_lifecycle_environment.name : nil, 'Erratum': erratum.errata_id, 'Type': erratum.errata_type, 'Published': erratum.issued, 'Applicable since': erratum.created_at, 'Severity': erratum.severity, 'Packages': erratum.package_names, 'CVEs': erratum.cves, 'Reboot suggested': erratum.reboot_suggested, ) -%>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>", "restorecon -RvF <%= input(\"directory\") %>", "<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>", "<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>" ]
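The remote execution commands in the list above can be chained into a short workflow. The following is a minimal Bash sketch, assuming a job template file at /tmp/restart_service.erb, an illustrative template named "Restart service" with a "service" input, and target hosts matched by host_group.name = webservers; the file path, input name, and job ID are hypothetical placeholders, not values from the documentation.

# Create a job template from a local ERB file (illustrative path, category, and name)
hammer job-template create --file "/tmp/restart_service.erb" --job-category "Services" --name "Restart service" --provider-type SSH

# Run the template against all hosts in the webservers host group
hammer job-invocation create --inputs service="httpd" --job-template "Restart service" --search-query "host_group.name = webservers"

# List invocations to find the job ID, then inspect the output for one host
hammer job-invocation list
hammer job-invocation output --host host1.example.com --id My_Job_ID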
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/managing_hosts/index
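The report template API calls shown in the commands above can be combined into an export/import round trip. A minimal sketch, assuming the admin:password credentials and the template ID 158 ("Hosts - complete list") taken from the sample listing, and hypothetical /tmp file paths; --insecure mirrors the documented examples and should be dropped once the Satellite CA is trusted.

# List report templates and note the ID of the one to export
curl --insecure --user admin:password --request GET https://satellite.example.com/api/report_templates | json_reformat

# Export template 158 to a local ERB file
curl --insecure --user admin:password --request GET --output /tmp/hosts_complete_list.erb https://satellite.example.com/api/report_templates/158/export

# Wrap the ERB in the documented {"name": ..., "template": ...} JSON and import it, for example on another Satellite
curl --insecure --user admin:password --header "Content-Type: application/json" --data @/tmp/hosts_complete_list.json --request POST https://satellite.example.com/api/report_templates/import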
Chapter 30. Webhook APIs
Chapter 30. Webhook APIs 30.1. Webhook APIs 30.1.1. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects the object and may change it. Type object 30.1.2. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it. Type object 30.2. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects the object and may change it. Type object 30.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object MutatingWebhook describes an admission webhook and the resources and operations it applies to. 30.2.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 30.2.1.2. .webhooks[] Description MutatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. The API server will try to use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2.
If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. matchConditions[] object MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. 
Default to the empty LabelSelector, which matches everything. reinvocationPolicy string reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are "Never" and "IfNeeded". Never: the webhook will not be called more than once in a single admission evaluation. IfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. Webhooks that specify this option must be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead. Defaults to "Never". Possible enum values: - "IfNeeded" indicates that the webhook may be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. - "Never" indicates that the webhook must not be called more than once in a single admission evaluation. rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Possible enum values: - "None" means that calling the webhook will have no side effects. - "NoneOnDryRun" means that calling the webhook will possibly have side effects, but if the request being reviewed has the dry-run attribute, the side effects will be suppressed. - "Some" means that calling the webhook will possibly have side effects. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. - "Unknown" means that no information is known about the side effects of calling the webhook. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds. 30.2.1.3. 
.webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either. 30.2.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that is hosting the webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 30.2.1.5. .webhooks[].matchConditions Description MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. Type array 30.2.1.6. .webhooks[].matchConditions[] Description MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests.
'request' - Attributes of the admission request(/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 30.2.1.7. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 30.2.1.8. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 30.2.2.
API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations DELETE : delete collection of MutatingWebhookConfiguration GET : list or watch objects of kind MutatingWebhookConfiguration POST : create a MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations GET : watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} DELETE : delete a MutatingWebhookConfiguration GET : read the specified MutatingWebhookConfiguration PATCH : partially update the specified MutatingWebhookConfiguration PUT : replace the specified MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} GET : watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 30.2.2.1. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations Table 30.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MutatingWebhookConfiguration Table 30.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 30.3. Body parameters Parameter Type Description body DeleteOptions schema Table 30.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind MutatingWebhookConfiguration Table 30.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 30.6. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MutatingWebhookConfiguration Table 30.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 30.8. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 30.9. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 202 - Accepted MutatingWebhookConfiguration schema 401 - Unauthorized Empty 30.2.2.2. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations Table 30.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
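As an illustration of the create operation described above (Tables 30.7 through 30.9), the following sketch calls this endpoint with the Python kubernetes client. The webhook name, namespace, service reference, and CA bundle shown here are placeholder assumptions, not values defined by this reference; the dry_run argument exercises the dryRun query parameter. The remaining query parameters for the watch endpoint (Table 30.10) continue below.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; inside a pod you would
# typically use config.load_incluster_config() instead.
config.load_kube_config()

api = client.AdmissionregistrationV1Api()

# Body mirroring the MutatingWebhookConfiguration schema; every name and
# the CA bundle below are illustrative placeholders.
body = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "MutatingWebhookConfiguration",
    "metadata": {"name": "example-mutating-webhook"},
    "webhooks": [
        {
            "name": "mutate.example.com",
            "admissionReviewVersions": ["v1"],
            "sideEffects": "None",
            "failurePolicy": "Fail",
            "clientConfig": {
                "service": {
                    "namespace": "example-namespace",
                    "name": "example-webhook-service",
                    "path": "/mutate",
                    "port": 443,
                },
                # caBundle must be a base64-encoded PEM bundle.
                "caBundle": "<base64-encoded-CA-bundle>",
            },
            "rules": [
                {
                    "apiGroups": ["apps"],
                    "apiVersions": ["v1"],
                    "operations": ["CREATE", "UPDATE"],
                    "resources": ["deployments"],
                    "scope": "Namespaced",
                }
            ],
        }
    ],
}

# POST /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations
# dry_run="All" validates the request without persisting the object.
created = api.create_mutating_webhook_configuration(body, dry_run="All")
print(created.metadata.name)
```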
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 30.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 30.2.2.3. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} Table 30.12. Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration Table 30.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MutatingWebhookConfiguration Table 30.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 30.15. Body parameters Parameter Type Description body DeleteOptions schema Table 30.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MutatingWebhookConfiguration Table 30.17. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MutatingWebhookConfiguration Table 30.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 30.19. Body parameters Parameter Type Description body Patch schema Table 30.20. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MutatingWebhookConfiguration Table 30.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 30.22. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 30.23. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty 30.2.2.4. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} Table 30.24. 
Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration Table 30.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, we require the `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - `resourceVersionMatch` set to any other value or unset: an Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 30.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 30.3. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it. Type object 30.3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object ValidatingWebhook describes an admission webhook and the resources and operations it applies to. 30.3.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 30.3.1.2. .webhooks[] Description ValidatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. The API server will try to use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. matchConditions[] object MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"], a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). 
Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Possible enum values: - "None" means that calling the webhook will have no side effects. - "NoneOnDryRun" means that calling the webhook will possibly have side effects, but if the request being reviewed has the dry-run attribute, the side effects will be suppressed. - "Some" means that calling the webhook will possibly have side effects. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. - "Unknown" means that no information is known about the side effects of calling the webhook. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Defaults to 10 seconds. 30.3.1.3. .webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form (scheme://host:port/path). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either. 30.3.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that is hosting the webhook. Defaults to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 30.3.1.5. .webhooks[].matchConditions Description MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. 
Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. Type array 30.3.1.6. .webhooks[].matchConditions[] Description MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request (/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 30.3.1.7. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 30.3.1.8. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. 
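To make the relationship between rules and matchConditions concrete, here is a minimal sketch of a single webhooks[] entry written as a plain Python dict. The webhook name, the in-cluster service reference, and the CEL expression are illustrative assumptions only; the remaining rules[] properties (operations, resources, and scope) are described immediately below.

```python
# One entry for the webhooks[] array of a ValidatingWebhookConfiguration.
validating_webhook = {
    "name": "validate.example.com",
    "admissionReviewVersions": ["v1"],
    "sideEffects": "None",
    "failurePolicy": "Fail",
    "clientConfig": {
        "service": {
            "namespace": "example-namespace",
            "name": "example-webhook-service",
            "path": "/validate",
        },
    },
    # rules: which operations on which resources this webhook cares about.
    "rules": [
        {
            "apiGroups": ["apps"],
            "apiVersions": ["v1"],
            "operations": ["CREATE", "UPDATE"],
            "resources": ["deployments"],
            "scope": "Namespaced",
        }
    ],
    # matchConditions: CEL expressions that further filter requests already
    # matched by the rules. This condition skips the webhook for requests
    # made by members of the system:nodes group.
    "matchConditions": [
        {
            "name": "exclude-kubelet-requests",
            "expression": "!('system:nodes' in request.userInfo.groups)",
        }
    ],
}
```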
operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 30.3.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations DELETE : delete collection of ValidatingWebhookConfiguration GET : list or watch objects of kind ValidatingWebhookConfiguration POST : create a ValidatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations GET : watch individual changes to a list of ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/{name} DELETE : delete a ValidatingWebhookConfiguration GET : read the specified ValidatingWebhookConfiguration PATCH : partially update the specified ValidatingWebhookConfiguration PUT : replace the specified ValidatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations/{name} GET : watch changes to an object of kind ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 30.3.2.1. /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations Table 30.27. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ValidatingWebhookConfiguration Table 30.28. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 30.29. Body parameters Parameter Type Description body DeleteOptions schema Table 30.30. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ValidatingWebhookConfiguration Table 30.31. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 30.32. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a ValidatingWebhookConfiguration Table 30.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 30.34. Body parameters Parameter Type Description body ValidatingWebhookConfiguration schema Table 30.35. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 201 - Created ValidatingWebhookConfiguration schema 202 - Accepted ValidatingWebhookConfiguration schema 401 - Unauthorized Empty 30.3.2.2. /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations Table 30.36. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 30.37. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 30.3.2.3. /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/{name} Table 30.38. Global path parameters Parameter Type Description name string name of the ValidatingWebhookConfiguration Table 30.39. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ValidatingWebhookConfiguration Table 30.40. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
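As a sketch of the per-name operations in this section (30.3.2.3), the following Python kubernetes client calls read and then delete a ValidatingWebhookConfiguration. The object name is a placeholder, and dry_run keeps the delete side-effect free; the remaining delete parameter (propagationPolicy) and the PATCH and PUT operations are described below.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.AdmissionregistrationV1Api()

# GET /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/{name}
cfg = api.read_validating_webhook_configuration("example-validating-webhook")
print(cfg.metadata.resource_version)

# DELETE the same object, using the gracePeriodSeconds and propagationPolicy
# query parameters from Table 30.40.
api.delete_validating_webhook_configuration(
    "example-validating-webhook",
    grace_period_seconds=0,
    propagation_policy="Background",
    dry_run="All",
)
```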
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 30.41. Body parameters Parameter Type Description body DeleteOptions schema Table 30.42. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ValidatingWebhookConfiguration Table 30.43. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ValidatingWebhookConfiguration Table 30.44. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 30.45. Body parameters Parameter Type Description body Patch schema Table 30.46. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 201 - Created ValidatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ValidatingWebhookConfiguration Table 30.47. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 30.48. Body parameters Parameter Type Description body ValidatingWebhookConfiguration schema Table 30.49. HTTP responses HTTP code Reponse body 200 - OK ValidatingWebhookConfiguration schema 201 - Created ValidatingWebhookConfiguration schema 401 - Unauthorized Empty 30.3.2.4. /apis/admissionregistration.k8s.io/v1/watch/validatingwebhookconfigurations/{name} Table 30.50. Global path parameters Parameter Type Description name string name of the ValidatingWebhookConfiguration Table 30.51. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ValidatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 30.52. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
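Note The following commands are an illustrative sketch only and are not part of the API reference above; they assume a cluster reachable with the oc client and rely only on the list parameters documented in this section (limit, continue, watch, fieldSelector).
List at most two ValidatingWebhookConfiguration objects and inspect metadata.continue in the response:
USD oc get --raw '/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?limit=2'
Follow changes with the client-side watch, which uses the list operation with the watch parameter as recommended above:
USD oc get validatingwebhookconfigurations --watch
To follow a single object when calling the API directly, append watch=true together with a field selector such as fieldSelector=metadata.name=<name> to the list URL instead of using the deprecated watch path.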
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/webhook-apis-1
Chapter 6. Deploying SR-IOV technologies
Chapter 6. Deploying SR-IOV technologies In your Red Hat OpenStack Platform NFV deployment, you can achieve higher performance with single root I/O virtualization (SR-IOV), when you configure direct access from your instances to a shared PCIe resource through virtual resources. 6.1. Prerequisites For details on how to install and configure the undercloud before deploying the overcloud, see the Director Installation and Usage Guide . Note Do not manually edit any values in /etc/tuned/cpu-partitioning-variables.conf that director heat templates modify. 6.2. Configuring SR-IOV Note The following CPU assignments, memory allocation, and NIC configurations are examples, and might be different from your use case. Generate the built-in ComputeSriov role to define nodes in the OpenStack cluster that run NeutronSriovAgent , NeutronSriovHostConfig , and default compute services. To prepare the SR-IOV containers, include the neutron-sriov.yaml and roles_data.yaml files when you generate the overcloud_images.yaml file. For more information on container image preparation, see Director Installation and Usage . Configure the parameters for the SR-IOV nodes under parameter_defaults appropriately for your cluster, and your hardware configuration. Typically, you add these settings to the network-environment.yaml file. In the same file, configure role specific parameters for SR-IOV compute nodes. Note The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again. The NovaVcpuPinSet parameter is now deprecated, and is replaced by NovaComputeCpuDedicatedSet for dedicated, pinned workflows. Configure the SR-IOV enabled interfaces in the compute.yaml network configuration template. To create SR-IOV virtual functions (VFs), configure the interfaces as standalone NICs: Ensure that the list of default filters includes the value AggregateInstanceExtraSpecsFilter . Run the overcloud_deploy.sh script. 6.3. NIC partitioning This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . You can configure single root I/O virtualization (SR-IOV) so that an Red Hat OpenStack Platform host can use virtual functions (VFs). When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. You can then apply a QoS (Quality of Service) priority value to VF interfaces as desired. Procedure Ensure that you complete the following steps when creating the templates for an overcloud deployment: Use the interface type sriov_pf in an os-net-config role file to configure a physical function that the host can use. Note The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. 
If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again. Use the interface type sriov_vf to configure virtual functions in a bond that the host can use. The VLAN tag must be unique across all VFs that belong to a common PF device. You must assign VLAN tags to an interface type: linux_bond ovs_bridge ovs_dpdk_port The applicable VF ID range starts at zero, and ends at the maximum number of VFs minus one. To reserve virtual functions for VMs, use the NovaPCIPassthrough parameter. You must assign a regex value to the address parameter to identify the VFs that you want to pass through to Nova, to be used by virtual instances, and not by the host. You can obtain these values from lspci , so, if necessary, boot a compute node into a Linux environment to obtain this information. The lspci command returns the address of each device in the format <bus>:<device>:<slot> . Enter these address values in the NovaPCIPassthrough parameter in the following format: Ensure that IOMMU is enabled on all nodes that require NIC partitioning. For example, if you want NIC Partitioning for compute nodes, enable IOMMU using the KernelArgs parameter for that role: Validation Check the number of VFs. Check Linux bonds. List OVS bonds. Show OVS connections. If you used NovaPCIPassthrough to pass VFs to instances, test by deploying an SR-IOV instance . The following bond modes are supported: balance-slb active-backup 6.4. Configuring OVS hardware offload This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . The procedure for OVS hardware offload configuration shares many of the same steps as configuring SR-IOV. Procedure Generate the ComputeSriov role: Configure the physical_network parameter to match your environment. For VLAN, set the physical_network parameter to the name of the network you create in neutron after deployment. This value should also be in NeutronBridgeMappings . For VXLAN, set the physical_network parameter to the string value null . Ensure the OvsHwOffload parameter under role specific parameters has a value of true . Example: Ensure that the list of default filters includes NUMATopologyFilter : Configure one or more network interfaces intended for hardware offload in the compute-sriov.yaml configuration file: Note Do not use the NeutronSriovNumVFs parameter when configuring Open vSwitch hardware offload. The numvfs parameter specifies the number of VFs in a network configuration file used by os-net-config . Note Do not configure Mellanox network interfaces as a nic-config interface type ovs-vlan because this prevents tunnel endpoints such as VXLAN from passing traffic due to driver limitations. Include the ovs-hw-offload.yaml file in the overcloud deploy command: 6.4.1. Verifying OVS hardware offload Confirm that a PCI device is in switchdev mode: Verify if offload is enabled in OVS: 6.5. Deploying an instance for SR-IOV Use host aggregates to separate high performance compute hosts. For information on creating host aggregates and associated flavors for scheduling see Creating host aggregates . 
Note Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on the Compute node in the Instances and Images Guide. Deploy an instance for single root I/O virtualization (SR-IOV) by performing the following steps: Create a flavor. Tip You can specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces by adding the extra spec hw:pci_numa_affinity_policy to your flavor. For more information, see Update flavor metadata in the Instances and Images Guide . Create the network. Create the port. Use vnic-type direct to create an SR-IOV virtual function (VF) port. Use the following command to create a virtual function with hardware offload. Use vnic-type direct-physical to create an SR-IOV PF port. Deploy an instance. 6.6. Creating host aggregates For better performance, deploy guests that have CPU pinning and hugepages. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata. Procedure Ensure that the AggregateInstanceExtraSpecsFilter value is included in the scheduler_default_filters parameter in the nova.conf file. This configuration can be set through the heat parameter NovaSchedulerDefaultFilters under role-specific parameters before deployment. Note To add this parameter to the configuration of an existing cluster, you can add it to the heat templates, and run the original deployment script again. Create an aggregate group for SR-IOV, and add relevant hosts. Define metadata, for example, sriov=true , that matches defined flavor metadata. Create a flavor. Set additional flavor properties. Note that the defined metadata, sriov=true , matches the defined metadata on the SR-IOV aggregate.
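Note The following verification commands are an optional sketch and are not part of the official procedure; the names sriov_instance and sriov_port are examples that assume the flavor, network, port, and aggregate created in the steps above.
Confirm that the instance was scheduled onto a host in the SR-IOV aggregate and is active:
USD openstack server show sriov_instance -c OS-EXT-SRV-ATTR:host -c status
Confirm that the port was bound with the expected VNIC type:
USD openstack port show sriov_port -c binding_vnic_type -c binding_vif_type -c status
Inside the guest, the virtual function appears as an additional PCI network device; the exact vendor string depends on your NIC:
USD lspci | grep -i ethernet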
[ "openstack overcloud roles generate -o /home/stack/templates/roles_data.yaml Controller ComputeSriov", "sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml", "NeutronNetworkType: 'vlan' NeutronNetworkVLANRanges: - tenant:22:22 - tenant:25:25 NeutronTunnelTypes: ''", "ComputeSriovParameters: IsolCpusList: \"1-19,21-39\" KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-19,21-39\" TunedProfileName: \"cpu-partitioning\" NeutronBridgeMappings: - tenant:br-link0 NeutronPhysicalDevMappings: - tenant:p7p1 - tenant:p7p2 NovaPCIPassthrough: - devname: \"p7p1\" physical_network: \"tenant\" - devname: \"p7p2\" physical_network: \"tenant\" NovaComputeCpuDedicatedSet: '1-19,21-39' NovaReservedHostMemory: 4096", "- type: sriov_pf name: p7p3 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false - type: sriov_pf name: p7p4 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false", "NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','AggregateInstanceExtraSpecsFilter']", "- type: sriov_pf name: <interface name> use_dhcp: false numvfs: <number of vfs> promisc: <true/false> #optional (Defaults to true)", "- type: linux_bond name: internal_bond bonding_options: mode=active-backup use_dhcp: false members: - type: sriov_vf device: nic7 vfid: 1 - type: sriov_vf device: nic8 vfid: 1 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: internal_bond addresses: - ip_netmask: get_param: InternalApiIpSubnet", "NovaPCIPassthrough: - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"06\", \"slot\": \"11\", \"function\": \"[5-7]\"} - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"06\", \"slot\": \"10\", \"function\": \"[5]\"}", "parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"", "cat /sys/class/net/p4p1/device/sriov_numvfs 10 cat /sys/class/net/p4p2/device/sriov_numvfs 10", "cat /proc/net/bonding/intapi_bond Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: p4p1_1 MII Status: up MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: p4p1_1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 16:b4:4c:aa:f0:a8 Slave queue ID: 0 Slave Interface: p4p2_1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: b6:be:82:ac:51:98 Slave queue ID: 0 cat /proc/net/bonding/st_bond Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: p4p1_3 MII Status: up MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: p4p1_3 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 9a:86:b7:cc:17:e4 Slave queue ID: 0 Slave Interface: p4p2_3 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: d6:07:f8:78:dd:5b Slave queue ID: 0", "ovs-appctl bond/show ---- bond_prov ---- 
bond_mode: active-backup bond may use recirculation: no, Recirc-ID : -1 bond-hash-basis: 0 updelay: 0 ms downdelay: 0 ms lacp_status: off lacp_fallback_ab: false active slave mac: f2:ad:c7:00:f5:c7(dpdk2) slave dpdk2: enabled active slave may_enable: true slave dpdk3: enabled may_enable: true ---- bond_tnt ---- bond_mode: active-backup bond may use recirculation: no, Recirc-ID : -1 bond-hash-basis: 0 updelay: 0 ms downdelay: 0 ms lacp_status: off lacp_fallback_ab: false active slave mac: b2:7e:b8:75:72:e8(dpdk0) slave dpdk0: enabled active slave may_enable: true slave dpdk1: enabled may_enable: true", "ovs-vsctl show cec12069-9d4c-4fa8-bfe4-decfdf258f49 Manager \"ptcp:6640:127.0.0.1\" is_connected: true Bridge br-tenant fail_mode: standalone Port br-tenant Interface br-tenant type: internal Port bond_tnt Interface \"dpdk0\" type: dpdk options: {dpdk-devargs=\"0000:82:02.2\"} Interface \"dpdk1\" type: dpdk options: {dpdk-devargs=\"0000:82:04.2\"} Bridge \"sriov2\" Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure Port \"phy-sriov2\" Interface \"phy-sriov2\" type: patch options: {peer=\"int-sriov2\"} Port \"sriov2\" Interface \"sriov2\" type: internal Bridge br-int Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure Port \"int-sriov2\" Interface \"int-sriov2\" type: patch options: {peer=\"phy-sriov2\"} Port br-int Interface br-int type: internal Port \"vhu93164679-22\" tag: 4 Interface \"vhu93164679-22\" type: dpdkvhostuserclient options: {vhost-server-path=\"/var/lib/vhost_sockets/vhu93164679-22\"} Port \"vhu5d6b9f5a-0d\" tag: 3 Interface \"vhu5d6b9f5a-0d\" type: dpdkvhostuserclient options: {vhost-server-path=\"/var/lib/vhost_sockets/vhu5d6b9f5a-0d\"} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port \"int-sriov1\" Interface \"int-sriov1\" type: patch options: {peer=\"phy-sriov1\"} Port int-br-vfs Interface int-br-vfs type: patch options: {peer=phy-br-vfs} Bridge br-vfs Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure Port phy-br-vfs Interface phy-br-vfs type: patch options: {peer=int-br-vfs} Port bond_prov Interface \"dpdk3\" type: dpdk options: {dpdk-devargs=\"0000:82:04.5\"} Interface \"dpdk2\" type: dpdk options: {dpdk-devargs=\"0000:82:02.5\"} Port br-vfs Interface br-vfs type: internal Bridge \"sriov1\" Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure Port \"sriov1\" Interface \"sriov1\" type: internal Port \"phy-sriov1\" Interface \"phy-sriov1\" type: patch options: {peer=\"int-sriov1\"} Bridge br-tun Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port \"vxlan-0a0a7315\" Interface \"vxlan-0a0a7315\" type: vxlan options: {df_default=\"true\", in_key=flow, local_ip=\"10.10.115.10\", out_key=flow, remote_ip=\"10.10.115.21\"} ovs_version: \"2.10.0\"", "openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov", "parameter_defaults: ComputeSriovParameters: IsolCpusList: 2-9,21-29,11-19,31-39 KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=128 intel_iommu=on iommu=pt\" OvsHwOffload: true TunedProfileName: \"cpu-partitioning\" NeutronBridgeMappings: - tenant:br-tenant NeutronPhysicalDevMappings: - tenant:p7p1 - tenant:p7p2 NovaPCIPassthrough: - devname: \"p7p1\" physical_network: \"null\" - devname: \"p7p2\" physical_network: \"null\" NovaReservedHostMemory: 4096 NovaComputeCpuDedicatedSet: 
1-9,21-29,11-19,31-39", "NovaSchedulerDefaultFilters: [\\'RetryFilter',\\'AvailabilityZoneFilter',\\'ComputeFilter',\\'ComputeCapabilitiesFilter',\\'ImagePropertiesFilter',\\'ServerGroupAntiAffinityFilter',\\'ServerGroupAffinityFilter',\\'PciPassthroughFilter',\\'NUMATopologyFilter']", "- type: ovs_bridge name: br-tenant mtu: 9000 members: - type: sriov_pf name: p7p1 numvfs: 5 mtu: 9000 primary: true promisc: true use_dhcp: false link_mode: switchdev", "TEMPLATES_HOME=\"/usr/share/openstack-tripleo-heat-templates\" CUSTOM_TEMPLATES=\"/home/stack/templates\" openstack overcloud deploy --templates -r USD{CUSTOM_TEMPLATES}/roles_data.yaml -e USD{TEMPLATES_HOME}/environments/ovs-hw-offload.yaml -e USD{CUSTOM_TEMPLATES}/network-environment.yaml -e USD{CUSTOM_TEMPLATES}/neutron-ovs.yaml", "devlink dev eswitch show pci/0000:03:00.0 pci/0000:03:00.0: mode switchdev inline-mode none encap enable", "ovs-vsctl get Open_vSwitch . other_config:hw-offload \"true\"", "openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>", "openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID> openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp", "openstack port create --network net1 --vnic-type direct sriov_port", "openstack port create --network net1 --vnic-type direct --binding-profile '{\"capabilities\": [\"switchdev\"]} sriov_hwoffload_port", "openstack port create --network net1 --vnic-type direct-physical sriov_port", "openstack server create --flavor <flavor> --image <image> --nic port-id=<id> <instance name>", "ComputeOvsDpdkSriovParameters: NovaSchedulerDefaultFilters: ['AggregateInstanceExtraSpecsFilter', 'RetryFilter','AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']", "openstack aggregate create sriov_group openstack aggregate add host sriov_group compute-sriov-0.localdomain openstack aggregate set --property sriov=true sriov_group", "openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>", "openstack flavor set --property aggregate_instance_extra_specs:sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/network_functions_virtualization_planning_and_configuration_guide/part-sriov-nfv-configuration
Chapter 2. Architecture of OpenShift Data Foundation
Chapter 2. Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from the Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see OpenShift Container Platform - Installation process . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you to select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or to make available the services from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management.
You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manage these applications. A simple deployment is best for situations where: Storage requirements are not clear. Red Hat OpenShift Data Foundation services run co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. In order for Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when: Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, on cloud, virtualized environment, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, Site Reliability Engineering (SRE), storage, and so on, needs to manage the external cluster providing storage services. Possibly pre-existing. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimum of 3 worker nodes.
Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require an OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions .
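Note As a minimal sketch of how infra nodes are typically dedicated to OpenShift Data Foundation, the following commands label and taint a node so that only storage workloads are scheduled on it. The specific label and taint keys shown here are assumptions drawn from the linked article, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation; verify them against that article for your version before applying them.
USD oc label node <node-name> node-role.kubernetes.io/infra=""
USD oc label node <node-name> cluster.ocs.openshift.io/openshift-storage=""
USD oc adm taint nodes <node-name> node.ocs.openshift.io/storage="true":NoSchedule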
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/planning_your_deployment/odf-architecture_rhodf
Chapter 7. Clair security scanner
Chapter 7. Clair security scanner 7.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 7.1.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website . 7.2. Setting up Clair on standalone Red Hat Quay deployments For standalone Red Hat Quay deployments, you can set up Clair manually. Procedure In your Red Hat Quay installation directory, create a new directory for the Clair database data: USD mkdir /home/<user-name>/quay-poc/postgres-clairv4 Set the appropriate permissions for the postgres-clairv4 file by entering the following command: USD setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4 Deploy a Clair PostgreSQL database by entering the following command: USD sudo podman run -d --name postgresql-clairv4 \ -e POSTGRESQL_USER=clairuser \ -e POSTGRESQL_PASSWORD=clairpass \ -e POSTGRESQL_DATABASE=clair \ -e POSTGRESQL_ADMIN_PASSWORD=adminpass \ -p 5433:5432 \ -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z \ registry.redhat.io/rhel8/postgresql-15 Install the PostgreSQL uuid-ossp module for your Clair deployment: USD sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"" | psql -d clair -U postgres' Example output CREATE EXTENSION Note Clair requires the uuid-ossp extension to be added to its PostgreSQL database. For users with the proper privileges, Clair creates the extension automatically. If users do not have the proper privileges, the extension must be added before starting Clair. If the extension is not present, the following error will be displayed when Clair attempts to start: ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501) . Stop the Quay container if it is running and restart it in configuration mode, loading the existing configuration as a volume: Log in to the configuration tool and click Enable Security Scanning in the Security Scanner section of the UI. Set the HTTP endpoint for Clair using a port that is not already in use on the quay-server system, for example, 8081 . Create a pre-shared key (PSK) using the Generate PSK button.
Security Scanner UI Validate and download the config.yaml file for Red Hat Quay, and then stop the Quay container that is running the configuration editor. Extract the new configuration bundle into your Red Hat Quay installation directory, for example: USD tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/ Create a folder for your Clair configuration file, for example: USD mkdir /etc/opt/clairv4/config/ Change into the Clair configuration folder: USD cd /etc/opt/clairv4/config/ Create a Clair configuration file, for example: http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: "MTU5YzA4Y2ZkNzJoMQ==" iss: ["quay"] # tracing and metrics trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" metrics: name: "prometheus" For more information about Clair's configuration format, see Clair configuration reference . Start Clair by using the container image, mounting in the configuration from the file you created: Note Running multiple Clair containers is also possible, but for deployment scenarios beyond a single container the use of a container orchestrator like Kubernetes or OpenShift Container Platform is strongly recommended. 7.3. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configures Clair automatically. 7.4. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with the same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug.
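Note The following checks are an optional sanity test and are not part of the official procedure. They assume the container name clairv4 and the ports from the example configuration above, and they assume that the introspection address (:8088) serves the configured Prometheus metrics at /metrics.
Confirm that the container is running and review its startup output for errors:
USD sudo podman ps --filter name=clairv4
USD sudo podman logs clairv4
Optionally, confirm that the introspection port responds:
USD curl -s http://quay-server.example.com:8088/metrics | head
Requests to the indexer and matcher APIs on port 8081 require a token signed with the pre-shared key configured under auth.psk, so an unauthenticated curl to that port is expected to be rejected.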
[ "mkdir /home/<user-name>/quay-poc/postgres-clairv4", "setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4", "sudo podman run -d --name postgresql-clairv4 -e POSTGRESQL_USER=clairuser -e POSTGRESQL_PASSWORD=clairpass -e POSTGRESQL_DATABASE=clair -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5433:5432 -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-15", "sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\"\" | psql -d clair -U postgres'", "CREATE EXTENSION", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret", "tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/", "mkdir /etc/opt/clairv4/config/", "cd /etc/opt/clairv4/config/", "http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: \"MTU5YzA4Y2ZkNzJoMQ==\" iss: [\"quay\"] tracing and metrics trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\" metrics: name: \"prometheus\"", "sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3", "podman pull ubuntu:20.04", "sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/clair-vulnerability-scanner
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1]
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the dnsRecord. status object status is the most recently observed status of the dnsRecord. 11.1.1. .spec Description spec is the specification of the desired behavior of the dnsRecord. Type object Required dnsManagementPolicy dnsName recordTTL recordType targets Property Type Description dnsManagementPolicy string dnsManagementPolicy denotes the current policy applied on the DNS record. Records that have policy set as "Unmanaged" are ignored by the ingress operator. This means that the DNS record on the cloud provider is not managed by the operator, and the "Published" status condition will be updated to "Unknown" status, since it is externally managed. Any existing record on the cloud provider can be deleted at the discretion of the cluster admin. This field defaults to Managed. Valid values are "Managed" and "Unmanaged". dnsName string dnsName is the hostname of the DNS record recordTTL integer recordTTL is the record TTL in seconds. If zero, the default is 30. RecordTTL will not be used in AWS regions Alias targets, but will be used in CNAME targets, per AWS API contract. recordType string recordType is the DNS record type. For example, "A" or "CNAME". targets array (string) targets are record targets. 11.1.2. .status Description status is the most recently observed status of the dnsRecord. Type object Property Type Description observedGeneration integer observedGeneration is the most recently observed generation of the DNSRecord. When the DNSRecord is updated, the controller updates the corresponding record in each managed zone. If an update for a particular zone fails, that failure is recorded in the status condition for the zone so that the controller can determine that it needs to retry the update for that specific zone. zones array zones are the status of the record in each zone. zones[] object DNSZoneStatus is the status of a record within a specific zone. 11.1.3. .status.zones Description zones are the status of the record in each zone. Type array 11.1.4. 
.status.zones[] Description DNSZoneStatus is the status of a record within a specific zone. Type object Property Type Description conditions array conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. conditions[] object DNSZoneCondition is just the standard condition fields. dnsZone object dnsZone is the zone where the record is published. 11.1.5. .status.zones[].conditions Description conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. Type array 11.1.6. .status.zones[].conditions[] Description DNSZoneCondition is just the standard condition fields. Type object Required status type Property Type Description lastTransitionTime string message string reason string status string type string 11.1.7. .status.zones[].dnsZone Description dnsZone is the zone where the record is published. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 11.2. API endpoints The following API endpoints are available: /apis/ingress.operator.openshift.io/v1/dnsrecords GET : list objects of kind DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords DELETE : delete collection of DNSRecord GET : list objects of kind DNSRecord POST : create a DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} DELETE : delete a DNSRecord GET : read the specified DNSRecord PATCH : partially update the specified DNSRecord PUT : replace the specified DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status GET : read status of the specified DNSRecord PATCH : partially update status of the specified DNSRecord PUT : replace status of the specified DNSRecord 11.2.1. /apis/ingress.operator.openshift.io/v1/dnsrecords Table 11.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind DNSRecord Table 11.2. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty 11.2.2. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords Table 11.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 11.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DNSRecord Table 11.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNSRecord Table 11.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.8. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty HTTP method POST Description create a DNSRecord Table 11.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.10. Body parameters Parameter Type Description body DNSRecord schema Table 11.11. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 202 - Accepted DNSRecord schema 401 - Unauthorized Empty 11.2.3. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} Table 11.12. Global path parameters Parameter Type Description name string name of the DNSRecord namespace string object name and auth scope, such as for teams and projects Table 11.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DNSRecord Table 11.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.15. Body parameters Parameter Type Description body DeleteOptions schema Table 11.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNSRecord Table 11.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.18. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNSRecord Table 11.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.20. Body parameters Parameter Type Description body Patch schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNSRecord Table 11.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.23. Body parameters Parameter Type Description body DNSRecord schema Table 11.24. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty 11.2.4. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status Table 11.25. Global path parameters Parameter Type Description name string name of the DNSRecord namespace string object name and auth scope, such as for teams and projects Table 11.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DNSRecord Table 11.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.28. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNSRecord Table 11.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.30. Body parameters Parameter Type Description body Patch schema Table 11.31. 
HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNSRecord Table 11.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.33. Body parameters Parameter Type Description body DNSRecord schema Table 11.34. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty
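For reference, the endpoints above can be exercised with the oc client after logging in to the cluster. The commands below are a minimal sketch rather than part of the API specification; they assume the DNSRecord objects created by the Ingress Operator live in the openshift-ingress-operator namespace and that the record for the default Ingress Controller is named default-wildcard, which may differ in your cluster.

# List DNSRecord objects managed by the Ingress Operator
oc get dnsrecords -n openshift-ingress-operator

# Read a single DNSRecord through the raw API path documented above
oc get --raw /apis/ingress.operator.openshift.io/v1/namespaces/openshift-ingress-operator/dnsrecords/default-wildcard

# Show only the per-zone publishing conditions from .status.zones[]
oc get dnsrecord default-wildcard -n openshift-ingress-operator -o jsonpath='{.status.zones[*].conditions}'

The first two commands map directly onto the GET endpoints listed in this section; the last one is a convenience for checking whether the "Published" condition is True for each zone.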
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/dnsrecord-ingress-operator-openshift-io-v1
Chapter 8. Configuring automatic upgrades for secured clusters
Chapter 8. Configuring automatic upgrades for secured clusters You can automate the upgrade process for each secured cluster and view the upgrade status from the RHACS portal. Automatic upgrades make it easier to stay up-to-date by automating the manual task of upgrading each secured cluster. With automatic upgrades, after you upgrade Central, the Sensor, Collector, and Compliance services in all secured clusters automatically upgrade to the latest version. Red Hat Advanced Cluster Security for Kubernetes also enables centralized management of all your secured clusters from within the RHACS portal. The new Clusters view displays information about all your secured clusters, the Sensor version for every cluster, and upgrade status messages. You can also use this view to selectively upgrade your secured clusters or change their configuration. Note The automatic upgrade feature is enabled by default. If you are using a private image registry, you must first push the Sensor and Collector images to your private registry. The Sensor must run with the default RBAC permissions. Automatic upgrades do not preserve any patches that you have made to any Red Hat Advanced Cluster Security for Kubernetes services running in your cluster. However, they preserve all labels and annotations that you have added to any Red Hat Advanced Cluster Security for Kubernetes object. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you remove this account, Sensor does not have enough permissions, and you must complete future upgrades manually. 8.1. Enabling automatic upgrades You can enable automatic upgrades for all secured clusters to automatically upgrade Collector and Compliance services in all secured clusters to the latest version. Procedure In the RHACS portal, go to Platform Configuration Clusters . Turn on the Automatically upgrade secured clusters toggle. Note For new installations, the Automatically upgrade secured clusters toggle is enabled by default. 8.2. Disabling automatic upgrades If you want to manage your secured cluster upgrades manually, you can disable automatic upgrades. Procedure In the RHACS portal, go to Platform Configuration Clusters . Turn off the Automatically upgrade secured clusters toggle. Note For new installations, the Automatically upgrade secured clusters toggle is enabled by default. 8.3. Automatic upgrade status The Clusters view lists all clusters and their upgrade statuses. Upgrade status Description Up to date with Central version The secured cluster is running the same version as Central. Upgrade available A new version is available for the Sensor and Collector. Upgrade failed. Retry upgrade. The automatic upgrade failed. Manual upgrade required The Sensor and Collector version is older than version 2.5.29.0. You must manually upgrade your secured clusters. Pre-flight checks complete The upgrade is in progress. Before performing an automatic upgrade, the upgrade installer runs a pre-flight check. During the pre-flight check, the installer verifies that certain conditions are satisfied and only then starts the upgrade process. 8.4. Automatic upgrade failure Sometimes, Red Hat Advanced Cluster Security for Kubernetes automatic upgrades might fail to install. When an upgrade fails, the status message for the secured cluster changes to Upgrade failed. Retry upgrade .
To view more information about the failure and understand why the upgrade failed, you can check the secured cluster row in the Clusters view. Some common reasons for the failure are: The sensor-upgrader deployment might not have run because of a missing or a non-schedulable image. The pre-flight checks may have failed, either because of insufficient RBAC permissions or because the cluster state is not recognizable. This can happen if you have edited Red Hat Advanced Cluster Security for Kubernetes service configurations or the auto-upgrade.stackrox.io/component label is missing. There might be errors in executing the upgrade. If this happens, the upgrade installer automatically attempts to roll back the upgrade. Note Sometimes, the rollback can fail as well. For such cases view the cluster logs to identify the issue or contact support. After you identify and fix the root cause for the upgrade failure, you can use the Retry Upgrade option to upgrade your secured cluster. 8.5. Upgrading secured clusters manually from the RHACS portal If you do not want to enable automatic upgrades, you can manage your secured cluster upgrades by using the Clusters view. To manually trigger upgrades for your secured clusters: Procedure In the RHACS portal, go to Platform Configuration Clusters . Select the Upgrade available option under the Upgrade status column for the cluster you want to upgrade. To upgrade multiple clusters at once, select the checkboxes in the Cluster column for the clusters you want to update. Click Upgrade .
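When diagnosing the failures described in Section 8.4, it can also help to inspect the upgrader workload on the secured cluster itself. The commands below are a hedged sketch rather than an official procedure; they assume the secured cluster services run in the default stackrox namespace and that the sensor-upgrader deployment mentioned earlier exists at the time you check.

# Confirm whether the upgrader deployment was created and could be scheduled
oc -n stackrox get deployment sensor-upgrader

# Review events and pod status for image pull or scheduling problems
oc -n stackrox describe deployment sensor-upgrader

# Read the upgrader logs to see why the pre-flight checks or the upgrade itself failed
oc -n stackrox logs deployment/sensor-upgrader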
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/configuring/configure-automatic-upgrades
10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)
10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later) As of Red Hat Enterprise Linux 7.3, you can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum (5) man page. The format of the pcs quorum update command is as follows. The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running.
[ "pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[ time-in-ms ] [wait_for_all=[0|1]]", "pcs quorum update wait_for_all=1 Checking corosync is not running on nodes Error: node1: corosync is running Error: node2: corosync is running pcs cluster stop --all node2: Stopping Cluster (pacemaker) node1: Stopping Cluster (pacemaker) node1: Stopping Cluster (corosync) node2: Stopping Cluster (corosync) pcs quorum update wait_for_all=1 Checking corosync is not running on nodes node2: corosync is not running node1: corosync is not running Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded pcs quorum config Options: wait_for_all: 1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-quorumoptmodify-haar
Chapter 15. DNS Servers
Chapter 15. DNS Servers DNS (Domain Name System) is a distributed database system that is used to associate host names with their respective IP addresses. For users, this has the advantage that they can refer to machines on the network by names that are usually easier to remember than the numerical network addresses. For system administrators, using a DNS server, also known as a name server , enables changing the IP address for a host without ever affecting the name-based queries. The use of DNS databases is not limited to associating host names with IP addresses; their use is becoming broader as DNSSEC is deployed. 15.1. Introduction to DNS DNS is usually implemented using one or more centralized servers that are authoritative for certain domains. When a client host requests information from a name server, it usually connects to port 53. The name server then attempts to resolve the name requested. If the name server is configured as a recursive name server and it does not have an authoritative answer, or does not already have the answer cached from an earlier query, it queries other name servers, called root name servers , to determine which name servers are authoritative for the name in question, and then queries them to get the requested name. Name servers configured as purely authoritative, with recursion disabled, will not do lookups on behalf of clients. 15.1.1. Name server Zones In a DNS server, all information is stored in basic data elements called resource records ( RR ). Resource records are defined in RFC 1034 . The domain names are organized into a tree structure. Each level of the hierarchy is divided by a period ( . ). For example: The root domain, denoted by . , is the root of the DNS tree, which is at level zero. The domain name com , referred to as the top-level domain ( TLD ), is a child of the root domain ( . ), so it is the first level of the hierarchy. The domain name example.com is at the second level of the hierarchy. Example 15.1. A Simple Resource Record An example of a simple resource record ( RR ): The domain name, example.com , is the owner for the RR. The value 86400 is the time to live ( TTL ). The letters IN , meaning " the Internet system " , indicate the class of the RR. The letter A indicates the type of RR (in this example, a host address). The host address 192.0.2.1 is the data contained in the final section of this RR. This one-line example is an RR. A set of RRs with the same type, owner, and class is called a resource record set ( RRSet ). Zones are defined on authoritative name servers through the use of zone files , which contain definitions of the resource records in each zone. Zone files are stored on primary name servers (also called master name servers ), where changes are made to the files, and secondary name servers (also called slave name servers ), which receive zone definitions from the primary name servers. Both primary and secondary name servers are authoritative for the zone and look the same to clients. Depending on the configuration, any name server can also serve as a primary or secondary server for multiple zones at the same time. Note that administrators of DNS and DHCP servers, as well as any provisioning applications, should agree on the host name format used in an organization. See Section 6.1.1, "Recommended Naming Practices" for more information on the format of host names. 15.1.2.
Name server Types There are two name server configuration types: authoritative Authoritative name servers answer to resource records that are part of their zones only. This category includes both primary (master) and secondary (slave) name servers. recursive Recursive name servers offer resolution services, but they are not authoritative for any zone. Answers for all resolutions are cached in a memory for a fixed period of time, which is specified by the retrieved resource record. Although a name server can be both authoritative and recursive at the same time, it is recommended not to combine the configuration types. To be able to perform their work, authoritative servers should be available to all clients all the time. On the other hand, since the recursive lookup takes far more time than authoritative responses, recursive servers should be available to a restricted number of clients only, otherwise they are prone to distributed denial of service (DDoS) attacks. 15.1.3. BIND as a Name server BIND consists of a set of DNS-related programs. It contains a name server called named , an administration utility called rndc , and a debugging tool called dig . See Red Hat Enterprise Linux System Administrator's Guide for more information on how to run a service in Red Hat Enterprise Linux.
[ "example.com. 86400 IN A 192.0.2.1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-dns_servers
12.8. Using an NPIV Virtual Adapter (vHBA) with SCSI Devices
12.8. Using an NPIV Virtual Adapter (vHBA) with SCSI Devices NPIV (N_Port ID Virtualization) is a software technology that allows sharing of a single physical Fibre Channel host bus adapter (HBA). This allows multiple guests to see the same storage from multiple physical hosts, and thus allows for easier migration paths for the storage. As a result, there is no need for the migration to create or copy storage, as long as the correct storage path is specified. In virtualization, the virtual host bus adapter , or vHBA , controls the LUNs for virtual machines. Each vHBA is identified by its own WWNN (World Wide Node Name) and WWPN (World Wide Port Name). The path to the storage is determined by the WWNN and WWPN values. This section provides instructions for configuring a vHBA on a virtual machine. Note that Red Hat Enterprise Linux 6 does not support persistent vHBA configuration across host reboots; verify any vHBA-related settings following a host reboot. 12.8.1. Creating a vHBA Procedure 12.6. Creating a vHBA Locate HBAs on the host system To locate the HBAs on your host system, examine the SCSI devices on the host system to locate a scsi_host with vport capability. Run the following command to retrieve a scsi_host list: For each scsi_host , run the following command to examine the device XML for the line <capability type='vport_ops'> , which indicates a scsi_host with vport capability. Check the HBA's details Use the virsh nodedev-dumpxml HBA_device command to see the HBA's details. The XML output from the virsh nodedev-dumpxml command will list the fields <name> , <wwnn> , and <wwpn> , which are used to create a vHBA. The <max_vports> value shows the maximum number of supported vHBAs. # virsh nodedev-dumpxml scsi_host3 <device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device> In this example, the <max_vports> value shows there are a total 127 virtual ports available for use in the HBA configuration. The <vports> value shows the number of virtual ports currently being used. These values update after creating a vHBA. Create a vHBA host device Create an XML file similar to the following (in this example, named vhba_host3.xml ) for the vHBA host. # cat vhba_host3.xml <device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device> The <parent> field specifies the HBA device to associate with this vHBA device. The details in the <device> tag are used in the step to create a new vHBA device for the host. See http://libvirt.org/formatnode.html for more information on the nodedev XML format. 
Create a new vHBA on the vHBA host device To create a vHBA on vhba_host3 , use the virsh nodedev-create command: Verify the vHBA Verify the new vHBA's details ( scsi_host5 ) with the virsh nodedev-dumpxml command: # virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>
[ "virsh nodedev-list --cap scsi_host scsi_host0 scsi_host1 scsi_host2 scsi_host3 scsi_host4", "virsh nodedev-dumpxml scsi_hostN", "virsh nodedev-dumpxml scsi_host3 <device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device>", "cat vhba_host3.xml <device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>", "virsh nodedev-create vhba_host3.xml Node device scsi_host5 created from vhba_host3.xml", "virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-NPIV_storage
14.7. Samba with CUPS Printing Support
14.7. Samba with CUPS Printing Support Samba allows client machines to share printers connected to the Samba server, as well as send Linux documents to Windows printer shares. Although there are other printing systems that function with Red Hat Enterprise Linux, CUPS (Common UNIX Print System) is the recommended printing system due to its close integration with Samba. 14.7.1. Simple smb.conf Settings The following example shows a very basic smb.conf configuration for CUPS support: More complicated printing configurations are possible. To add additional security and privacy for printing confidential documents, users can have their own print spooler not located in a public path. If a job fails, other users would not have access to the file. The print$ share contains printer drivers for clients to access if not available locally. The print$ share is optional and may not be required depending on the organization. Setting browseable to Yes enables the printer to be viewed in the Windows Network Neighborhood, provided the Samba server is set up correctly in the domain/workgroup.
[ "[global] load printers = Yes printing = cups printcap name = cups [printers] comment = All Printers path = /var/spool/samba/print printer = IBMInfoP browseable = No public = Yes guest ok = Yes writable = No printable = Yes printer admin = @ntadmins [printUSD] comment = Printer Drivers Share path = /var/lib/samba/drivers write list = ed, john printer admin = ed, john" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-cups
B.17.2.3. Opening and ending tag mismatch
B.17.2.3. Opening and ending tag mismatch Symptom The following error occurs: Investigation The error message above contains three clues to identify the offending tag: The message following the last colon, clock line 16 and domain , reveals that <clock> contains a mismatched tag on line 16 of the document. The last hint is the pointer in the context part of the message, which identifies the second offending tag. Unpaired tags must be closed with /> . The following snippet does not follow this rule and has produced the error message shown above: This error is caused by mismatched XML tags in the file. Every XML tag must have a matching start and end tag. Other examples of mismatched XML tags The following examples produce similar error messages and show variations of mismatched XML tags. This snippet contains an unended pair tag for <features> : This snippet contains an end tag ( </name> ) without a corresponding start tag: Solution Ensure all XML tags start and end correctly.
[ "error: ( name_of_guest.xml ):61: Opening and ending tag mismatch: clock line 16 and domain </domain> ---------^", "<domain type='kvm'> <clock offset='utc'>", "<domain type='kvm'> <features> <acpi/> <pae/> </domain>", "<domain type='kvm'> </name> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sec-app_xml_errors-tag_mismatch
A.4. Explanation of Settings in the New Virtual Disk and Edit Virtual Disk Windows
A.4. Explanation of Settings in the New Virtual Disk and Edit Virtual Disk Windows Note The following tables do not include information on whether a power cycle is required because that information is not applicable to these scenarios. Table A.21. New Virtual Disk and Edit Virtual Disk Settings: Image Field Name Description Size(GB) The size of the new virtual disk in GB. Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Storage Domain The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. Allocation Policy The provisioning policy for the new virtual disk. Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. The virtual size and the actual size of a preallocated disk are the same. Preallocated virtual disks take more time to create than thin provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers and other I/O intensive virtual machines. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible. Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. The virtual size of the disk is the maximum limit; the actual size of the disk is the space that has been allocated so far. Thin provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thin provisioned virtual disks are recommended for desktops. Disk Profile The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Wipe After Delete Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted. Bootable Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Enable Discard Allows you to shrink a thin provisioned disk while the virtual machine is up. For block storage, the underlying storage device must support discard calls, and the option cannot be used with Wipe After Delete unless the underlying storage supports the discard_zeroes_data property. 
For file storage, the underlying file system and the block device must support discard calls. If all requirements are met, SCSI UNMAP commands issued from guest virtual machines is passed on by QEMU to the underlying storage to free up the unused space. The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets . Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs. Table A.22. New Virtual Disk and Edit Virtual Disk Settings: Direct LUN Field Name Description Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. By default the last 4 characters of the LUN ID is inserted into the field. The default behavior can be configured by setting the PopulateDirectLUNDiskDescriptionWithLUNId configuration key to the appropriate value using the engine-config command. The configuration key can be set to -1 for the full LUN ID to be used, or 0 for this feature to be ignored. A positive integer populates the description with the corresponding number of characters of the LUN ID. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Host The host on which the LUN will be mounted. You can select any host in the data center. Storage Type The type of external LUN to add. You can select from either iSCSI or Fibre Channel . Discover Targets This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected. Address - The host name or IP address of the target server. Port - The port by which to attempt a connection to the target server. The default port is 3260. User Authentication - The iSCSI server requires User Authentication. The User Authentication field is visible when you are using iSCSI external LUNs. CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Bootable Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Enable Discard Allows you to shrink a thin provisioned disk while the virtual machine is up. With this option enabled, SCSI UNMAP commands issued from guest virtual machines is passed on by QEMU to the underlying storage to free up the unused space. Enable SCSI Pass-Through Available when the Interface is set to VirtIO-SCSI . Selecting this check box enables passthrough of a physical SCSI device to the virtual disk. 
A VirtIO-SCSI interface with SCSI passthrough enabled automatically includes SCSI discard support. Read Only is not supported when this check box is selected. When this check box is not selected, the virtual disk uses an emulated SCSI device. Read Only is supported on emulated VirtIO-SCSI disks. Allow Privileged SCSI I/O Available when the Enable SCSI Pass-Through check box is selected. Selecting this check box enables unfiltered SCSI Generic I/O (SG_IO) access, allowing privileged SG_IO commands on the disk. This is required for persistent reservations. Using SCSI Reservation Available when the Enable SCSI Pass-Through and Allow Privileged SCSI I/O check boxes are selected. Selecting this check box disables migration for any virtual machine using this disk, to prevent virtual machines that are using SCSI reservation from losing access to the disk. Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons to each LUN, select the LUN to add. Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data. The following considerations must be made when using a direct LUN as a virtual machine hard disk image: Live storage migration of direct LUN hard disk images is not supported. Direct LUN disks are not included in virtual machine exports. Direct LUN disks are not included in virtual machine snapshots. The Cinder settings form will be disabled if there are no available OpenStack Volume storage domains on which you have permissions to create a disk in the relevant Data Center. Cinder disks require access to an instance of OpenStack Volume that has been added to the Red Hat Virtualization environment using the External Providers window; see Adding an OpenStack Volume (Cinder) Instance for Storage Management for more information. Table A.23. New Virtual Disk and Edit Virtual Disk Settings: Cinder Field Name Description Size(GB) The size of the new virtual disk in GB. Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Storage Domain The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. Volume Type The volume type of the virtual disk. The drop-down list shows all available volume types. The volume type will be managed and configured on OpenStack Cinder. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Bootable Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. 
The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Important Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual disks that contain such file systems (e.g. EXT3 , EXT4 , or XFS ).
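The PopulateDirectLUNDiskDescriptionWithLUNId key described in Table A.22 is set with the engine-config command on the Manager machine. A minimal sketch, assuming you want the last four characters of the LUN ID in the Description field; the value 4 is only an example, and engine-config changes typically require a restart of the ovirt-engine service to take effect:
engine-config -s PopulateDirectLUNDiskDescriptionWithLUNId=4
systemctl restart ovirt-engine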
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/add_virtual_disk_dialogue_entries
Chapter 5. Configuring Streams for Apache Kafka
Chapter 5. Configuring Streams for Apache Kafka Use the Kafka configuration properties files to configure Streams for Apache Kafka. The properties file is in Java format, with each property on a separate line in the following format: Lines starting with # or ! are treated as comments and are ignored by Streams for Apache Kafka components. Values can be split into multiple lines by using \ directly before the newline/carriage return. After saving the changes in the properties file, you need to restart the Kafka node. In a multi-node environment, repeat the process on each node in the cluster. 5.1. Using standard Kafka configuration properties Use standard Kafka configuration properties to configure Kafka components. The properties provide options to control and tune the configuration of the following Kafka components: Brokers Topics Producer, consumer, and management clients Kafka Connect Kafka Streams Broker and client parameters include options to configure authorization, authentication and encryption. For further information on Kafka configuration properties and how to use the properties to tune your deployment, see the following guides: Kafka configuration properties Kafka configuration tuning 5.2. Loading configuration values from environment variables Use the Environment Variables Configuration Provider plugin to load configuration data from environment variables. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables. You can use the provider to load configuration data for all Kafka components, including producers and consumers. Use the provider, for example, to provide the credentials for Kafka Connect connector configuration. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. The Environment Variables Configuration Provider JAR file. The JAR file is available from the Streams for Apache Kafka archive . Procedure Add the Environment Variables Configuration Provider JAR file to the Kafka libs directory. Initialize the Environment Variables Configuration Provider in the configuration properties file of the Kafka component. For example, to initialize the provider for Kafka, add the configuration to the server.properties file. Configuration to enable the Environment Variables Configuration Provider config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider Add configuration to the properties file to load data from environment variables. Configuration to load data from an environment variable option=USD{env: <MY_ENV_VAR_NAME> } Use capitalized or upper-case environment variable naming conventions, such as MY_ENV_VAR_NAME . Save the changes. Restart the Kafka component. For information on restarting brokers in a multi-node cluster, see Section 3.6, "Performing a graceful rolling restart of Kafka brokers" . 5.3. Configuring Kafka Kafka uses properties files to store static configuration. The recommended location for the configuration files is /opt/kafka/config/kraft/ . The configuration files have to be readable by the kafka user. Streams for Apache Kafka ships example configuration files that highlight various basic and advanced features of the product. 
They can be found under config/kraft/ in the Streams for Apache Kafka installation directory as follows: (default) config/kraft/server.properties for nodes running in combined mode config/kraft/broker.properties for nodes running as brokers config/kraft/controller.properties for nodes running as controllers This chapter explains the most important configuration options. 5.3.1. Listeners Listeners are used to connect to Kafka brokers. Each Kafka broker can be configured to use multiple listeners. Each listener requires a different configuration so it can listen on a different port or network interface. To configure listeners, edit the listeners property in the Kafka configuration properties file. Add listeners to the listeners property as a comma-separated list. Configure each listener as follows: If <hostname> is empty, Kafka uses the value returned by java.net.InetAddress.getCanonicalHostName() as the hostname. Example configuration for multiple listeners listeners=internal-1://:9092,internal-2://:9093,replication://:9094 When a Kafka client wants to connect to a Kafka cluster, it first connects to the bootstrap server , which is one of the cluster nodes. The bootstrap server provides the client with a list of all the brokers in the cluster, and the client connects to each one individually. The list of brokers is based on the configured listeners . Advertised listeners Optionally, you can use the advertised.listeners property to provide the client with a different set of listener addresses than those given in the listeners property. This is useful if additional network infrastructure, such as a proxy, is between the client and the broker, or an external DNS name is being used instead of an IP address. The advertised.listeners property is formatted in the same way as the listeners property. Example configuration for advertised listeners listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235 Note The names of the advertised listeners must match those listed in the listeners property. Inter-broker listeners Inter-broker listeners are used for communication between Kafka brokers. Inter-broker communication is required for: Coordinating workloads between different brokers Replicating messages between partitions stored on different brokers The inter-broker listener can be assigned to a port of your choice. When multiple listeners are configured, you can define the name of the inter-broker listener in the inter.broker.listener.name property of your broker configuration. Here, the inter-broker listener is named REPLICATION : listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION Controller listeners Controller configuration is used to connect and communicate with the controller that coordinates the cluster and manages the metadata used to track the status of brokers and partitions. By default, communication between the controllers and brokers uses a dedicated controller listener. Controllers are responsible for coordinating administrative tasks, such as partition leadership changes, so one or more of these listeners is required. Specify listeners to use for controllers using the controller.listener.names property. You can specify a quorum of controller voters using the controller.quorum.voters property.
The quorum enables a leader-follower structure for administrative tasks, with the leader actively managing operations and followers as hot standbys, ensuring metadata consistency in memory and facilitating failover. listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER controller.quorum.voters=1@localhost:9090 The format for the controller voters is <node_id>@<hostname>:<port> . 5.3.2. Commit logs Apache Kafka stores all records it receives from producers in commit logs. The commit logs contain the actual data, in the form of records, that Kafka needs to deliver. Note that these records differ from application log files, which detail the broker's activities. Log directories You can configure log directories using the log.dirs configuration property to store commit logs in one or multiple log directories. It should be set to the /var/lib/kafka directory created during installation: For performance reasons, you can configure log.dirs to multiple directories and place each of them on a different physical device to improve disk I/O performance. For example: 5.3.3. Node ID Node ID is a unique identifier for each node (broker or controller) in the cluster. You can assign an integer greater than or equal to 0 as the node ID. The node ID is used to identify the nodes after restarts or crashes and it is therefore important that the ID is stable and does not change over time. The node ID is configured in the Kafka configuration properties file:
[ "<option> = <value>", "This is a comment", "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"bob\" password=\"bobs-password\";", "config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider", "option=USD{env: <MY_ENV_VAR_NAME> }", "<listener_name>://<hostname>:<port>", "listeners=internal-1://:9092,internal-2://:9093,replication://:9094", "listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235", "listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION", "listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER controller.quorum.voters=1@localhost:9090", "log.dirs=/var/lib/kafka", "log.dirs=/var/lib/kafka1,/var/lib/kafka2,/var/lib/kafka3", "node.id=1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-configuring-amq-streams-str
Chapter 4. CSIStorageCapacity [storage.k8s.io/v1]
Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] Description CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes. For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123" The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero The producer of these objects can decide which approach is more suitable. They are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node. Type object Required storageClassName 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources capacity Quantity capacity is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. The semantic is currently (CSI spec 1.2) defined as: The available capacity, in bytes, of the storage that can be used to provision volumes. If not set, that information is currently unavailable. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds maximumVolumeSize Quantity maximumVolumeSize is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. This is defined since CSI spec 1.4.0 as the largest size that may be used in a CreateVolumeRequest.capacity_range.required_bytes field to create a volume with the same parameters as those in GetCapacityRequest. The corresponding value in the Kubernetes API is ResourceRequirements.Requests in a volume claim. metadata ObjectMeta Standard object's metadata. The name has no particular meaning. It must be a DNS subdomain (dots allowed, 253 characters). To ensure that there are no conflicts with other CSI drivers on the cluster, the recommendation is to use csisc-<uuid>, a generated name, or a reverse-domain name which ends with the unique CSI driver name. Objects are namespaced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata nodeTopology LabelSelector nodeTopology defines which nodes have access to the storage for which capacity was reported. If not set, the storage is not accessible from any node in the cluster. 
If empty, the storage is accessible from all nodes. This field is immutable. storageClassName string storageClassName represents the name of the StorageClass that the reported capacity applies to. It must meet the same requirements as the name of a StorageClass object (non-empty, DNS subdomain). If that object no longer exists, the CSIStorageCapacity object is obsolete and should be removed by its creator. This field is immutable. 4.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csistoragecapacities GET : list or watch objects of kind CSIStorageCapacity /apis/storage.k8s.io/v1/watch/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities DELETE : delete collection of CSIStorageCapacity GET : list or watch objects of kind CSIStorageCapacity POST : create a CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} DELETE : delete a CSIStorageCapacity GET : read the specified CSIStorageCapacity PATCH : partially update the specified CSIStorageCapacity PUT : replace the specified CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} GET : watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/storage.k8s.io/v1/csistoragecapacities HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.1. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty 4.2.2. /apis/storage.k8s.io/v1/watch/csistoragecapacities HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities HTTP method DELETE Description delete collection of CSIStorageCapacity Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.5. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIStorageCapacity Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 202 - Accepted CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.4. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity HTTP method DELETE Description delete a CSIStorageCapacity Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIStorageCapacity Table 4.13. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIStorageCapacity Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIStorageCapacity Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.6. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} Table 4.19. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity HTTP method GET Description watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
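For reference, a minimal sketch of a CSIStorageCapacity object that uses the fields described above; the object name, namespace, storage class, sizes, and topology label are illustrative:
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: csisc-example
  namespace: default
storageClassName: standard
capacity: 1234Gi
maximumVolumeSize: 10Gi
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: us-east1
Existing objects can be listed with oc get csistoragecapacities -A (or the equivalent kubectl command).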
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage_apis/csistoragecapacity-storage-k8s-io-v1
Installation overview
Installation overview OpenShift Container Platform 4.12 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installation_overview/index
Chapter 3. Configuring IAM for IBM Cloud
Chapter 3. Configuring IAM for IBM Cloud In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud(R); therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud(R) 3.3. 
Next steps Installing a cluster on IBM Cloud(R) with customizations 3.4. Additional resources Preparing to update a cluster with manually maintained credentials
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/configuring-iam-ibm-cloud
Chapter 1. Network APIs
Chapter 1. Network APIs 1.1. CloudPrivateIPConfig [cloud.network.openshift.io/v1] Description CloudPrivateIPConfig performs an assignment of a private IP address to the primary NIC associated with cloud VMs. This is done by specifying the IP and Kubernetes node which the IP should be assigned to. This CRD is intended to be used by the network plugin which manages the cluster network. The spec side represents the desired state requested by the network plugin, and the status side represents the current state that this CRD's controller has executed. No users will have permission to modify it, and if a cluster-admin decides to edit it for some reason, their changes will be overwritten the next time the network plugin reconciles the object. Note: the CR's name must specify the requested private IP address (can be IPv4 or IPv6). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object 1.3. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object 1.4. EgressQoS [k8s.ovn.org/v1] Description EgressQoS is a CRD that allows the user to define a DSCP value for pods egress traffic on its namespace to specified CIDRs. Traffic from these pods will be checked against each EgressQoSRule in the namespace's EgressQoS, and if there is a match the traffic is marked with the relevant DSCP value. Type object 1.5. Endpoints [v1] Description Endpoints is a collection of endpoints that implement the actual service. Example: Type object 1.6. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object 1.7. EgressRouter [network.operator.openshift.io/v1] Description EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage: - A service called <name> - An egress pod called <name> - A NAD called <name> Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object. Type object 1.8. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 1.9.
IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 1.10. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for Whereabouts for IP address allocation Type object 1.11. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 1.12. NetworkPolicy [networking.k8s.io/v1] Description NetworkPolicy describes what network traffic is allowed for a set of Pods Type object 1.13. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object 1.14. PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1] Description PodNetworkConnectivityCheck Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.15. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.16. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object
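To make the EgressFirewall description in 1.2 concrete, a minimal sketch of a CR is shown here; the namespace and CIDRs are illustrative, the object is conventionally named default, and the final Deny rule changes the default-allow behavior noted above into an explicit default-deny:
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: my-namespace
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0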
[ "Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/network-apis
Chapter 9. Creating secure HTTP load balancers
Chapter 9. Creating secure HTTP load balancers You can create various types of load balancers to manage secure HTTP (HTTPS) network traffic. Section 9.1, "About non-terminated HTTPS load balancers" Section 9.2, "Creating a non-terminated HTTPS load balancer" Section 9.3, "About TLS-terminated HTTPS load balancers" Section 9.4, "Creating a TLS-terminated HTTPS load balancer" Section 9.5, "Creating a TLS-terminated HTTPS load balancer with SNI" Section 9.6, "Creating HTTP and TLS-terminated HTTPS load balancing on the same IP and back-end" 9.1. About non-terminated HTTPS load balancers A non-terminated HTTPS load balancer acts effectively like a generic TCP load balancer: the load balancer forwards the raw TCP traffic from the web client to the back-end servers where the HTTPS connection is terminated with the web clients. While non-terminated HTTPS load balancers do not support advanced load balancer features like Layer 7 functionality, they do lower load balancer resource utilization because the back-end servers manage the certificates and keys themselves. 9.2. Creating a non-terminated HTTPS load balancer If your application requires HTTPS traffic to terminate on the back-end member servers, typically called HTTPS pass through , you can use the HTTPS protocol for your load balancer listeners. Prerequisites A private subnet that contains back-end servers that host HTTPS applications that are configured with a TLS-encrypted web application on TCP port 443. The back-end servers are configured with a health check at the URL path / . A shared external (public) subnet that you can reach from the internet. Procedure Source your credentials file. Example Create a load balancer ( lb1 ) on a public subnet ( public_subnet ): Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Monitor the state of the load balancer. Example Before going to the next step, ensure that the provisioning_status is ACTIVE . Create a listener ( listener1 ) on a port ( 443 ). Example Create the listener default pool ( pool1 ). Example Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Example Add load balancer members ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the default pool. Example Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources Manage Secrets with OpenStack Key Manager guide loadbalancer in the Command Line Interface Reference 9.3. About TLS-terminated HTTPS load balancers When a TLS-terminated HTTPS load balancer is implemented, web clients communicate with the load balancer over Transport Layer Security (TLS) protocols. The load balancer terminates the TLS session and forwards the decrypted requests to the back-end servers. When you terminate the TLS session on the load balancer, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. 9.4.
Creating a TLS-terminated HTTPS load balancer When you use TLS-terminated HTTPS load balancers, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites A private subnet that contains back-end servers that host non-secure HTTP applications on TCP port 80. The back-end servers are configured with a health check at the URL path / . A shared external (public) subnet that you can reach from the internet. TLS public-key cryptography is configured with the following characteristics: A TLS certificate, key, and intermediate certificate chain are obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example, www.example.com . The certificate, key, and intermediate certificate chain reside in separate files in the current directory. The key and certificate are PEM-encoded. The key is not encrypted with a passphrase. The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together. You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Manage Secrets with OpenStack Key Manager guide. Procedure Combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Note The following procedure does not work if you password protect the PKCS12 file. Source your credentials file. Example Use the Key Manager service to create a secret resource ( tls_secret1 ) for the PKCS12 file. Example Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Example Monitor the state of the load balancer. Example Before going to the next step, ensure that the provisioning_status is ACTIVE . Create a TERMINATED_HTTPS listener ( listener1 ), and reference the secret resource as the default TLS container for the listener. Example Create a pool ( pool1 ) and make it the default pool for the listener. Example Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Example Add the non-secure HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Example Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources Manage Secrets with OpenStack Key Manager guide loadbalancer in the Command Line Interface Reference 9.5. Creating a TLS-terminated HTTPS load balancer with SNI For TLS-terminated HTTPS load balancers that employ Server Name Indication (SNI) technology, a single listener can contain multiple TLS certificates and enable the load balancer to know which certificate to present when it uses a shared IP. It is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites A private subnet that contains back-end servers that host non-secure HTTP applications on TCP port 80. The back-end servers are configured with a health check at the URL path / . A shared external (public) subnet that you can reach from the internet. TLS public-key cryptography is configured with the following characteristics: Multiple TLS certificates, keys, and intermediate certificate chains have been obtained from an external certificate authority (CA) for the DNS names assigned to the load balancer VIP address, for example, www.example.com and www2.example.com . The keys and certificates are PEM-encoded. The keys are not encrypted with passphrases. You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Manage Secrets with OpenStack Key Manager guide. Procedure For each of the TLS certificates in the SNI list, combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). In this example, you create two PKCS12 files ( server.p12 and server2.p12 ), one for each certificate ( www.example.com and www2.example.com ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Source your credentials file. Example Use the Key Manager service to create secret resources ( tls_secret1 and tls_secret2 ) for the PKCS12 files. Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Monitor the state of the load balancer. Example Before going to the next step, ensure that the provisioning_status is ACTIVE . Create a TERMINATED_HTTPS listener ( listener1 ), and use SNI to reference both the secret resources. (Reference tls_secret1 as the default TLS container for the listener.) Create a pool ( pool1 ) and make it the default pool for the listener. Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ). Example Add the non-secure HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources Manage Secrets with OpenStack Key Manager guide loadbalancer in the Command Line Interface Reference 9.6. Creating HTTP and TLS-terminated HTTPS load balancing on the same IP and back-end You can configure a non-secure listener and a TLS-terminated HTTPS listener on the same load balancer and the same IP address when you want to respond to web clients with the exact same content, regardless of whether the client is connected with a secure or non-secure HTTP protocol. It is a best practice to also create a health monitor to ensure that your back-end members remain available. Prerequisites A private subnet that contains back-end servers that host non-secure HTTP applications on TCP port 80. The back-end servers are configured with a health check at the URL path / . A shared external (public) subnet that you can reach from the internet.
TLS public-key cryptography is configured with the following characteristics: A TLS certificate, key, and optional intermediate certificate chain have been obtained from an external certificate authority (CA) for the DNS name assigned to the load balancer VIP address (for example, www.example.com). The certificate, key, and intermediate certificate chain reside in separate files in the current directory. The key and certificate are PEM-encoded. The key is not encrypted with a passphrase. The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together. You have configured the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Manage Secrets with OpenStack Key Manager guide. The non-secure HTTP listener is configured with the same pool as the HTTPS TLS-terminated load balancer. Procedure Combine the key ( server.key ), certificate ( server.crt ), and intermediate certificate chain ( ca-chain.crt ) into a single PKCS12 file ( server.p12 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Source your credentials file. Example Use the Key Manager service to create a secret resource ( tls_secret1 ) for the PKCS12 file. Create a load balancer ( lb1 ) on the public subnet ( public_subnet ). Monitor the state of the load balancer. Example Before going to the next step, ensure that the provisioning_status is ACTIVE . Create a TERMINATED_HTTPS listener ( listener1 ), and reference the secret resource as the default TLS container for the listener. Create a pool ( pool1 ) and make it the default pool for the listener. Create a health monitor on the pool ( pool1 ) that connects to the back-end servers and tests the path ( / ): Example Add the non-secure HTTP back-end servers ( 192.0.2.10 and 192.0.2.11 ) on the private subnet ( private_subnet ) to the pool. Create a non-secure HTTP listener ( listener2 ), and make its default pool the same as that of the secure listener. Verification View and verify the load balancer ( lb1 ) settings. Example Sample output When a health monitor is present and functioning properly, you can check the status of each member. A working member ( b85c807e-4d7c-4cbd-b725-5e8afddf80d2 ) has an ONLINE value for its operating_status . Example Sample output Additional resources Manage Secrets with OpenStack Key Manager guide loadbalancer in the Command Line Interface Reference
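As a quick client-side check after the verification steps, you can request the VIP directly; this assumes the sample VIP address 198.51.100.11 from the output above, and -k is used because the example certificate is unlikely to be trusted locally. The plain HTTP request applies to the combined HTTP and HTTPS configuration in Section 9.6:
curl -k https://198.51.100.11/
curl http://198.51.100.11/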
[ "source ~/overcloudrc", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --name listener1 --protocol HTTPS --protocol-port 443 lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type TLS-HELLO --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 443 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 443 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 443 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12", "source ~/overcloudrc", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\"", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | 
+---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12 openssl pkcs12 -export -inkey server2.key -in server2.crt -certfile ca-chain2.crt -passout pass: -out server2.p12", "source ~/overcloudrc", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\" openstack secret store --name='tls_secret2' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server2.p12)\"", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') --sni-container-refs USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') USD(openstack secret list | awk '/ tls_secret2 / {print USD2}') -- lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 
2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+", "openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12", "source ~/overcloudrc", "openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload=\"USD(base64 < server.p12)\"", "openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet", "openstack loadbalancer show lb1", "openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=USD(openstack secret list | awk '/ tls_secret1 / {print USD2}') lb1", "openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP", "openstack loadbalancer healthmonitor create --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1", "openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1", "openstack loadbalancer listener create --protocol-port 80 --protocol HTTP --name listener2 --default-pool pool1 lb1", "openstack loadbalancer show lb1", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+", "openstack loadbalancer member show pool1 b85c807e-4d7c-4cbd-b725-5e8afddf80d2", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2022-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | | | operating_status | ONLINE | | project_id | 
dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2022-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_octavia_for_load_balancing-as-a-service/create-secure-lbs_rhosp-lbaas
Deploying and managing OpenShift Container Storage using Red Hat OpenStack Platform
Deploying and managing OpenShift Container Storage using Red Hat OpenStack Platform Red Hat OpenShift Container Storage 4.8 How to install and manage Red Hat Storage Documentation Team Abstract Read this document for instructions on installing and managing Red Hat OpenShift Container Storage on Red Hat OpenStack Platform (RHOSP). Important Deploying and managing OpenShift Container Storage on Red Hat OpenStack Platform is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/index
probe::scsi.iocompleted
probe::scsi.iocompleted Name probe::scsi.iocompleted - SCSI mid-layer running the completion processing for block device I/O requests Synopsis scsi.iocompleted Values device_state The current state of the device dev_id The SCSI device id req_addr The current struct request pointer, as a number data_direction_str The data direction, as a string device_state_str The current state of the device, as a string lun The LUN number goodbytes The number of bytes completed data_direction Specifies whether this command is a read from or a write to the device channel The channel number host_no The host number
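The commands field for this probe entry is empty. As a hedged illustration only (not taken from the tapset reference), the following shell one-liner shows how the values listed above could be printed with SystemTap; it assumes the systemtap package and matching kernel debuginfo are installed and that the command is run as root.

```
# Print one line per completed SCSI request using the values this probe exposes.
stap -e 'probe scsi.iocompleted {
  printf("%d:%d:%d:%d dir=%s state=%s goodbytes=%d\n",
         host_no, channel, lun, dev_id,
         data_direction_str, device_state_str, goodbytes)
}'
```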
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scsi-iocompleted
4.4.4. Changing the Parameters of a Logical Volume
4.4.4. Changing the Parameters of a Logical Volume To change the parameters of a logical volume, use the lvchange command. For a listing of the parameters you can change, see the lvchange(8) man page. You can use the lvchange command to activate and deactivate logical volumes. To activate or deactivate all the logical volumes in a volume group at the same time, use the vgchange command, as described in Section 4.3.6, "Changing the Parameters of a Volume Group". The following command changes the permission on logical volume lvol1 in volume group vg00 to read-only.
[ "lvchange -pr vg00/lvol1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/lv_change
Support
Support OpenShift Container Platform 4.10 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc api-resources -o name | grep config.openshift.io", "oc explain <resource_name>.config.openshift.io", "oc get <resource_name>.config -o yaml", "oc edit <resource_name>.config -o yaml", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc get route prometheus-k8s -n openshift-monitoring -o jsonpath=\"{.spec.host}\"", "{__name__=~\"cluster:usage:.*|count:up0|count:up1|cluster_version|cluster_version_available_updates|cluster_operator_up|cluster_operator_conditions|cluster_version_payload|cluster_installer|cluster_infrastructure_provider|cluster_feature_set|instance:etcd_object_counts:sum|ALERTS|code:apiserver_request_total:rate:sum|cluster:capacity_cpu_cores:sum|cluster:capacity_memory_bytes:sum|cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum|openshift:cpu_usage_cores:sum|openshift:memory_usage_bytes:sum|workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum|cluster:virt_platform_nodes:sum|cluster:node_instance_type_count:sum|cnv:vmi_status_running:count|cluster:vmi_request_cpu_cores:sum|node_role_os_version_machine:cpu_capacity_cores:sum|node_role_os_version_machine:cpu_capacity_sockets:sum|subscription_sync_total|olm_resolution_duration_seconds|csv_succeeded|csv_abnormal|cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum|cluster:kubelet_volume_stats_used_bytes:provisioner:sum|ceph_cluster_total_bytes|ceph_cluster_total_used_raw_bytes|ceph_health_status|job:ceph_osd_metadata:count|job:kube_pv:count|job:ceph_pools_iops:total|job:ceph_pools_iops_bytes:total|job:ceph_versions_running:count|job:noobaa_total_unhealthy_buckets:sum|job:noobaa_bucket_count:sum|job:noobaa_total_object_count:sum|noobaa_accounts_num|noobaa_total_usage|console_url|cluster:network_attachment_definition_instances:max|cluster:network_attachment_definition_enabled_instance_up:max|cluster:ingress_controller_aws_nlb_active:sum|insightsclient_request_send_total|cam_app_workload_migrations|cluster:apiserver_current_inflight_requests:sum:max_over_time:2m|cluster:alertmanager_integrations:max|cluster:telemetry_selected_series:count|openshift:prometheus_tsdb_head_series:sum|openshift:prometheus_tsdb_head_samples_appended_total:sum|monitoring:container_memory_working_set_bytes:sum|namespace_job:scrape_series_added:topk3_sum1h|namespace_job:scrape_samples_post_metric_relabeling:topk3|monitoring:haproxy_server_http_responses_total:sum|rhmi_status|cluster_legacy_scheduler_policy|cluster_master_schedulable|che_workspace_status|che_workspace_started_total|che_workspace_failure_total|che_workspace_start_time_seconds_sum|che_workspace_start_time_seconds_count|cco_credentials_mode|cluster:kube_persistentvolume_plugin_type_counts:sum|visual_web_terminal_sessions_total|acm_managed_cluster_info|cluster:vsphere_vcenter_info:sum|cluster:vsphere_esxi_version_total:sum|cluster:vsphere_node_hw_version_total:sum|openshift:build_by_strategy:sum|rhods_aggregate_availability|rhods_total_users|instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile|instance:etcd_mvcc_db_total_size_in_bytes:sum|instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile|instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum|instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile|jaeger_operator_instances_storage_types|jaeger_operator_instances_strategies|jaeger_operator_instances_agent_strategies|appsvcs:cores_by_product:sum|nto_custom_profiles:count|openshift_csi_share_configmap|openshift_csi_share_secret|openshift_csi_share_mount_failures_total|openshift_csi_s
hare_mount_requests_total\",alertstate=~\"firing|\"}", "INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)", "oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data", "oc extract secret/pull-secret -n openshift-config --to=.", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } }", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret", "cp pull-secret pull-secret-backup", "set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret", "oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running", "oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1", "{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }", "apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]", "oc get -n openshift-insights deployment insights-operator -o yaml", "initContainers: - name: insights-operator image: <your_insights_operator_image_version> terminationMessagePolicy: FallbackToLogsOnError volumeMounts:", "oc apply -n openshift-insights -f gather-job.yaml", "oc describe -n openshift-insights job/insights-operator-job", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job- <your_job>", "oc logs -n openshift-insights insights-operator-job- <your_job> insights-operator", "I0407 11:55:38.192084 1 
diskrecorder.go:34] Wrote 108 records to disk in 33ms", "oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data", "oc delete -n openshift-insights job insights-operator-job", "oc extract secret/pull-secret -n openshift-config --to=.", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }", "curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload", "* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc import-image is/must-gather -n openshift", "oc adm must-gather", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 2", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ 
├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather -- /usr/bin/gather_audit_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc get nodes", "oc debug node/my-cluster-node", "oc new-project dummy", "oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "sosreport -k crio.all=on -k crio.logs=on 1", "Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e", "redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc adm must-gather --dest-dir /tmp/captures \\ <.> --source-dir '/tmp/tcpdump/' \\ <.> 
--image registry.redhat.io/openshift4/network-tools-rhel8:latest \\ <.> --node-selector 'node-role.kubernetes.io/worker' \\ <.> --host-network=true \\ <.> --timeout 30s \\ <.> -- tcpdump -i any \\ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300", "tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "ip ad", "toolbox", "tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "chroot /host crictl ps", "chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'", "nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap.pcap 1", "redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz 1", "chroot /host", "toolbox", "dnf install -y <package_name>", "chroot /host", "vi ~/.toolboxrc", "REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3", "toolbox", "oc get clusterversion", "oc describe clusterversion", "ssh <user_name>@<load_balancer> systemctl status haproxy", "ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'", "ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'", "dig <wildcard_fqdn> @<dns_server>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1", "./openshift-install create ignition-configs --dir=./install_dir", "tail -f ~/<installation_directory>/.openshift_install.log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service", "curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1", "grep -is 'bootstrap.ign' /var/log/httpd/access_log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "curl -I http://<http_server_fqdn>:<port>/master.ign 1", "grep -is 'master.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <master_node>", "oc get daemonsets -n openshift-sdn", "oc get pods -n openshift-sdn", "oc logs <sdn_pod> -n openshift-sdn", "oc get network.config.openshift.io cluster -o yaml", "./openshift-install create manifests", "oc get pods -n openshift-network-operator", "oc logs pod/<network_operator_pod_name> -n openshift-network-operator", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh 
core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/master", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get pods -n openshift-etcd", "oc get pods -n openshift-etcd-operator", "oc describe pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -c <container_name> -n <namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'", "curl -I http://<http_server_fqdn>:<port>/worker.ign 1", "grep -is 'worker.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <worker_node>", "oc get pods -n openshift-machine-api", "oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy", "oc adm node-logs --role=worker -u kubelet", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=worker -u crio", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=worker --path=sssd", "oc adm node-logs --role=worker --path=sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/worker", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get clusteroperators", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 
15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc describe clusteroperator <operator_name>", "oc get pods -n <operator_namespace>", "oc describe pod/<operator_pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -n <operator_namespace>", "oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>", "oc adm release info <image_path>:<tag> --commits", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address>\" 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "oc get nodes", "oc adm top nodes", "oc adm top node my-node", "oc debug node/my-node", "chroot /host", "systemctl is-active kubelet", "systemctl status kubelet", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc debug node/my-node", "chroot /host", "systemctl is-active crio", "systemctl status crio.service", "oc adm node-logs --role=master -u crio", "oc adm node-logs <node_name> -u crio", "ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory", "can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.", "oc adm cordon <nodename>", "oc adm drain <nodename> --ignore-daemonsets --delete-emptydir-data", "ssh [email protected] sudo -i", "systemctl stop kubelet", ".. 
for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done", "crictl rmp -fa", "systemctl stop crio", "crio wipe -f", "systemctl start crio systemctl start kubelet", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.23.0", "oc adm uncordon <nodename>", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.23.0", "rpm-ostree kargs --append='crashkernel=256M'", "systemctl enable kdump.service", "systemctl reboot", "variant: openshift version: 4.10.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true", "butane 99-worker-kdump.bu -o 99-worker-kdump.yaml", "oc create -f ./99-worker-kdump.yaml", "systemctl --failed", "journalctl -u <unit>.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-nodenet-override spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_script> mode: 0755 overwrite: true path: /usr/local/bin/override-node-ip.sh systemd: units: - contents: | [Unit] Description=Override node IP detection Wants=network-online.target Before=kubelet.service After=network-online.target [Service] Type=oneshot ExecStart=/usr/local/bin/override-node-ip.sh ExecStart=systemctl daemon-reload [Install] WantedBy=multi-user.target enabled: true name: nodenet-override.service", "E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]", "oc debug node/<node_name>", "chroot /host", "ovs-appctl vlog/list", "console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO", "Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg", "systemctl daemon-reload", "systemctl restart ovs-vswitchd", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 
99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service", "oc apply -f 99-change-ovs-loglevel.yaml", "oc adm node-logs <node_name> -u ovs-vswitchd", "journalctl -b -f -u ovs-vswitchd.service", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc debug node/my-node", "chroot /host", "crictl ps", "crictl ps --name network-operator", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo 
crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "true", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "false", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'", "oc get namespaces", "operator-ns-1 Terminating", "oc get crds", "oc delete crd <crd_name>", "oc get EtcdCluster -n <namespace_name>", "oc get EtcdCluster --all-namespaces", "oc delete <cr_name> <cr_instance_name> -n <namespace_name>", "oc get namespace <namespace_name>", "oc get sub,csv,installplan -n <namespace>", "oc project <project_name>", "oc get pods", "oc status", "skopeo inspect docker://<image_reference>", "oc edit deployment/my-deployment", "oc get pods -w", "oc get events", "oc logs <pod_name>", "oc logs <pod_name> -c <container_name>", "oc exec <pod_name> ls -alh /var/log", "oc exec <pod_name> cat /var/log/<path_to_log>", "oc exec <pod_name> -c <container_name> ls /var/log", "oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>", "oc project <namespace>", "oc rsh <pod_name> 1", 
"oc rsh -c <container_name> pod/<pod_name>", "oc port-forward <pod_name> <host_port>:<pod_port> 1", "oc get deployment -n <project_name>", "oc debug deployment/my-deployment --as-root -n <project_name>", "oc get deploymentconfigs -n <project_name>", "oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>", "oc cp <local_path> <pod_name>:/<path> -c <container_name> 1", "oc cp <pod_name>:/<path> -c <container_name><local_path> 1", "oc get pods -w 1", "oc logs -f pod/<application_name>-<build_number>-build", "oc logs -f pod/<application_name>-<build_number>-deploy", "oc logs -f pod/<application_name>-<build_number>-<random_string>", "oc describe pod/my-app-1-akdlg", "oc logs -f pod/my-app-1-akdlg", "oc exec my-app-1-akdlg -- cat /var/log/my-application.log", "oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log", "oc exec -it my-app-1-akdlg /bin/bash", "oc debug node/my-cluster-node", "chroot /host", "crictl ps", "crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'", "nsenter -n -t 31150 -- ip ad", "Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4", "oc delete pod <old_pod> --force=true --grace-period=0", "oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator", "ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "C:\\> net user <username> * 1", "oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods", "oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log", "oc adm node-logs -l kubernetes.io/os=windows --path=journal", "oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker", "C:\\> powershell", "C:\\> Get-EventLog -LogName Application -Source Docker", "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 
3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10,count by (job)({__name__=~\".+\"}))", "oc <options> --loglevel <log_level>", "oc whoami -t" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/support/index
Chapter 8. Managing Containers
Chapter 8. Managing Containers You can automate the deployment of applications inside Linux containers using RHUI. Using containers offers the following advantages: Requires less storage and memory than VMs: Because containers hold only what is needed to run an application, saving and sharing them is more efficient than it is with VMs that include entire operating systems. Improved performance: Because you are not running an entirely separate operating system, a container typically runs faster than an application that carries the overhead of a new VM. Secure: Because a container typically has its own network interfaces, file system, and memory, the application running in that container can be isolated and secured from other activities on a host computer. Flexible: With an application's runtime requirements included with the application in the container, a container can run in multiple environments. 8.1. Understanding containers in Red Hat Update Infrastructure A container is an application sandbox. Each container is based on an image that holds necessary configuration data. When you launch a container from an image, a writable layer is added on top of this image. Every time you commit a container, a new image layer is added to store your changes. An image is a read-only layer that is never modified. All changes are made in the top-most writable layer, and the changes can be saved only by creating a new image. Each image depends on one or more parent images. A platform image is an image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it. 8.2. Adding a container to Red Hat Update Infrastructure You can use the rhui-manager tool to add containers using the Repository Management section. Procedure If you did not enable container support when you installed RHUI, run the following commands on the RHUA: Optional: Edit the /etc/rhui/rhui-tools.conf file and set the container registry credentials in the RHUI configuration by updating the following lines in the [container] section with your Red Hat credentials. If you have a clean installation of RHUI 4.1.1 or newer, the last several lines of the file contain a [container] section with podman-specific options and handy comments. If you have updated from an earlier version of RHUI, the section is available at the end of the /etc/rhui/rhui-tools.conf.rpmnew file, and you can copy it to the rhui-tools.conf file. Note If you normally synchronize from a registry other than registry.redhat.io, also change the values of the registry_url and registry_auth options accordingly. On the RHUA node, run rhui-manager: Press r to access the Repository Management screen. Press ac to add a new Red Hat container. If the container you want to add exists in a non-default registry, enter the registry URL. Press Enter without entering anything to use the default registry. Enter the name of the container in the registry: Enter a unique ID for the container. rhui-manager converts the name of the container from the registry to the format that is usable in Pulp by replacing slashes and dots with underscores. You can accept the converted name by pressing Enter, or enter a name of your choice. Enter a display name for the container. Optional: Set your login and password in the RHUI configuration if prompted. Verify the displayed summary. Press y to proceed and add the container. 8.3.
Synchronizing container repositories After you add your container to Red Hat Update Infrastructure, you can use the rhui-manager tool to synchronize the container. Procedure On the RHUA node, run rhui-manager: Press s to access the synchronization status and scheduling screen. Press sr to synchronize an individual repository immediately. Enter the number of the repository that you wish to synchronize. Press c to confirm the selection. Verify the repository and press y to synchronize or n to cancel. 8.4. Generating container client configurations RHUI clients can pull containers from RHUI by using a client configuration RPM. The RPM contains the load balancer's certificate; you can use it to add the load balancer to the client's container registry configuration and to modify the container configuration. Procedure On the RHUA node, run rhui-manager: Press e to access the entitlement certificates and client configuration RPMs screen. Press d to create a container client configuration RPM. Enter the full path of a local directory where you want to save the configuration files. Enter the name of the RPM. Enter the version number of the configuration RPM. The default is 2.0. Enter the release number of the configuration RPM. The default is 1. Enter the number of days the certificate should be valid. The default is 365. 8.5. Installing a container configuration RPM on the client After generating the container configuration RPM, you can install it on a client by first copying it to your local machine and then transferring it to the client. Procedure Retrieve the RPM from the RHUA node to your local machine: Transfer the RPM from the local machine to the client. Switch to the client and install the RPM: 8.6. Testing the podman pull command on the client You can use the podman pull command to verify that the client can pull container content from RHUI. Procedure Run the podman pull command. If the podman pull command fails, check the synchronization status in rhui-manager. The repository has probably not been synchronized yet, and you must wait until the synchronization completes.
[ "rhui-installer --rerun --container-support-enabled True # rhui-manager --noninteractive cds reinstall --all", "[container] ... registry_username: your_RH_login registry_password: your_RH_password", "rhui-manager", "-= Red Hat Update Infrastructure Management Tool =- -= Repository Management =- l list repositories currently managed by the RHUI i display detailed information on a repository a add a new Red Hat content repository ac add a new Red Hat container c create a new custom repository (RPM content only) d delete a repository from the RHUI u upload content to a custom repository (RPM content only) ur upload content from a remote web site (RPM content only) p list packages in a repository (RPM content only) Connected: rhua.example.com", "rhui (repo) => ac Specify URL of registry [https://registry.redhat.io]:", "jboss-eap-6/eap64-openshift", "jboss-eap-6_eap64-openshift", "The following container will be added: Registry URL: http://registry.redhat.io Container Id: jboss-eap-6_eap64-openshift Display Name: jboss-eap-6_eap64-openshift Upstream Container Name: jboss-eap-6/eap64-openshift Proceed? (y/n)", "y Successfully added container jboss-eap-6_eap64-openshift", "rhui-manager", "The following repositories will be scheduled for synchronization: jboss-eap-6_eap64-openshift Proceed? (y/n) y Scheduling sync for jboss-eap-6_eap64-openshift ... successfully scheduled for the next available timeslot.", "rhui-manager", "/root/", "containertest", "Successfully created client configuration RPM. Location: /root/containertest-2.0/build/RPMS/noarch/containertest-2.0-1.noarch.rpm", "scp [email protected]:/root/containertest-2.0/build/RPMS/noarch/containertest-2.0-1.noarch.rpm .", "scp containertest-2.0-1.noarch.rpm [email protected]:.", "yum install containertest-2.0-1.noarch.rpm", "podman pull jboss-eap-6_eap64-openshift Resolving \"jboss-eap-6_eap64-openshift\" using unqualified-search registries (/etc/containers/registries.conf) Trying to pull cds.example.com/jboss-eap-6_eap64-openshift:latest Getting image source signatures Copying blob b0e0b761a531 done Copying blob aa23ac04e287 done Copying blob 0d30ea1353f9 done Copying config 3d0728c907 done Writing manifest to image destination Storing signatures 3d0728c907d55d9faedc4d19de003f21e2a1ebdf3533b3d670a4e2f77c6b35d2", "Resolving \"jboss-eap-6_eap64-openshift\" using unqualified-search registries (/etc/containers/registries.conf) Trying to pull cds.example.com/jboss-eap-6_eap64-openshift:latest Error: initializing source docker://cds.example.com/jboss-eap-6_eap64-openshift:latest: reading manifest latest in cds.example.com/jboss-eap-6_eap64-openshift: manifest unknown: Manifest not found." ]
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/configuring_and_managing_red_hat_update_infrastructure/assembly_assembly-cmg-managing-containers
Chapter 7. Configuring HA cluster resources on Red Hat OpenStack Platform
Chapter 7. Configuring HA cluster resources on Red Hat OpenStack Platform The following table lists the RHOSP-specific resource agents you use to configure resources for an HA cluster on RHOSP. openstack-info (required) Provides support for RHOSP-specific resource agents. You must configure an openstack-info resource as a cloned resource for your cluster in order to run any RHOSP-specific resource agent other than the fence_openstack fence agent. For information about configuring an openstack-info resource, see Configuring an openstack-info resource in an HA cluster on Red Hat OpenStack Platform. openstack-virtual-ip Configures a virtual IP address resource. For information about configuring an openstack-virtual-ip resource, see Configuring a virtual IP address in an HA cluster on Red Hat OpenStack Platform. openstack-floating-ip Configures a floating IP address resource. For information about configuring an openstack-floating-ip resource, see Configuring a floating IP address in an HA cluster on Red Hat OpenStack Platform. openstack-cinder-volume Configures a block storage resource. For information about configuring an openstack-cinder-volume resource, see Configuring a block storage resource in an HA cluster on Red Hat OpenStack Platform. When configuring other cluster resources, use the standard Pacemaker resource agents. 7.1. Configuring an openstack-info resource in an HA cluster on Red Hat OpenStack Platform (required) You must configure an openstack-info resource in order to run any RHOSP-specific resource agent other than the fence_openstack fence agent. This procedure to create an openstack-info resource uses a clouds.yaml file for RHOSP authentication. Prerequisites A configured HA cluster running on RHOSP Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-info resource agent, run the following command. Create the openstack-info resource as a clone resource. In this example, the resource is also named openstack-info. This example uses a clouds.yaml configuration file, and the cloud= parameter is set to the name of the cloud in your clouds.yaml file. Check the cluster status to verify that the resource is running. 7.2. Configuring a virtual IP address in an HA cluster on Red Hat OpenStack Platform This procedure to create an RHOSP virtual IP address resource for an HA cluster on RHOSP uses a clouds.yaml file for RHOSP authentication. The RHOSP virtual IP resource operates in conjunction with an IPaddr2 cluster resource. When you configure an RHOSP virtual IP address resource, the resource agent ensures that the RHOSP infrastructure associates the virtual IP address with a cluster node on the network. This allows an IPaddr2 resource to function on that node. Prerequisites A configured HA cluster running on RHOSP An assigned IP address to use as the virtual IP address Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-virtual-ip resource agent, run the following command. Run the following command to determine the subnet ID for the virtual IP address you are using. In this example, the virtual IP address is 172.16.0.119.
Create the RHOSP virtual IP address resource. The following command creates an RHOSP virtual IP address resource for an IP address of 172.16.0.119, specifying the subnet ID you determined in the previous step. Configure ordering and location constraints: Ensure that the openstack-info resource starts before the virtual IP address resource. Ensure that the virtual IP address resource runs on the same node as the openstack-info resource. Create an IPaddr2 resource for the virtual IP address. Configure ordering and location constraints to ensure that the openstack-virtual-ip resource starts before the IPaddr2 resource and that the IPaddr2 resource runs on the same node as the openstack-virtual-ip resource. Verification Verify the resource constraint configuration. Check the cluster status to verify that the resources are running. 7.3. Configuring a floating IP address in an HA cluster on Red Hat OpenStack Platform The following procedure creates a floating IP address resource for an HA cluster on RHOSP. This procedure uses a clouds.yaml file for RHOSP authentication. Prerequisites A configured HA cluster running on RHOSP An IP address on the public network to use as the floating IP address, assigned by the RHOSP administrator Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-floating-ip resource agent, run the following command. Find the subnet ID for the address on the public network that you will use to create the floating IP address resource. The public network is usually the network with the default gateway. Run the following command to display the default gateway address. Run the following command to find the subnet ID for the public network. This command generates a table with ID and Subnet headings. Create the floating IP address resource, specifying the public IP address for the resource and the subnet ID for that address. When you configure the floating IP address resource, the resource agent configures a virtual IP address on the public network and associates it with a cluster node. Configure an ordering constraint to ensure that the openstack-info resource starts before the floating IP address resource. Configure a location constraint to ensure that the floating IP address resource runs on the same node as the openstack-info resource. Verification Verify the resource constraint configuration. Check the cluster status to verify that the resources are running. 7.4. Configuring a block storage resource in an HA cluster on Red Hat OpenStack Platform The following procedure creates a block storage resource for an HA cluster on RHOSP. This procedure uses a clouds.yaml file for RHOSP authentication. Prerequisites A configured HA cluster running on RHOSP A block storage volume created by the RHOSP administrator Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-cinder-volume resource agent, run the following command. Determine the volume ID of the block storage volume you are configuring as a cluster resource. Run the following command to display a table of available volumes that includes the UUID and name of each volume.
If you already know the volume name, you can run the following command, specifying the volume you are configuring. This displays a table with an ID field. Create the block storage resource, specifying the ID for the volume. Configure an ordering constraint to ensure that the openstack-info resource starts before the block storage resource. Configure a location constraint to ensure that the block storage resource runs on the same node as the openstack-info resource. Verification Verify the resource constraint configuration. Check the cluster status to verify that the resource is running.
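The openstack-cinder-volume resource only attaches and detaches the volume; it does not create or mount a file system on it. In practice the block storage resource is usually paired with a Filesystem resource that starts after it. The following sketch is not part of this chapter and makes several assumptions: the attached volume appears on the node as /dev/vdb, the mount point /var/lib/appdata already exists on every cluster node, and an XFS file system has already been created on the volume. Adjust the device path, mount point, and resource names for your deployment.
# Mount the volume that the cinder-vol resource attaches to the node
pcs resource create appfs ocf:heartbeat:Filesystem device=/dev/vdb directory=/var/lib/appdata fstype=xfs
# Start the file system only after the volume is attached, and keep both on the same node
pcs constraint order start cinder-vol then appfs
pcs constraint colocation add appfs with cinder-vol score=INFINITY
With these constraints in place, a failover moves the volume attachment and the mount to the same surviving node.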
[ "pcs resource describe openstack-info", "pcs resource create openstack-info openstack-info cloud=\"ha-example\" clone", "pcs status Full List of Resources: * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ]", "pcs resource describe openstack-virtual-ip", "openstack --os-cloud=ha-example subnet list +--------------------------------------+ ... +----------------+ | ID | ... | Subnet | +--------------------------------------+ ... +----------------+ | 723c5a77-156d-4c3b-b53c-ee73a4f75185 | ... | 172.16.0.0/24 | +--------------------------------------+ ... +----------------+", "pcs resource create ClusterIP-osp ocf:heartbeat:openstack-virtual-ip cloud=ha-example ip=172.16.0.119 subnet_id=723c5a77-156d-4c3b-b53c-ee73a4f75185", "pcs constraint order start openstack-info-clone then ClusterIP-osp Adding openstack-info-clone ClusterIP-osp (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint colocation add ClusterIP-osp with openstack-info-clone score=INFINITY", "pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.16.0.119", "pcs constraint order start ClusterIP-osp then ClusterIP Adding ClusterIP-osp ClusterIP (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint colocation add ClusterIP with ClusterIP-osp", "pcs constraint config Location Constraints: Ordering Constraints: start ClusterIP-osp then start ClusterIP (kind:Mandatory) start openstack-info-clone then start ClusterIP-osp (kind:Mandatory) Colocation Constraints: ClusterIP with ClusterIP-osp (score:INFINITY) ClusterIP-osp with openstack-info-clone (score:INFINITY)", "pcs status . . . Full List of Resources: * fenceopenstack (stonith:fence_openstack): Started node01 * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ] * ClusterIP-osp (ocf::heartbeat:openstack-virtual-ip): Started node03 * ClusterIP (ocf::heartbeat:IPaddr2): Started node03", "pcs resource describe openstack-floating-ip", "route -n | grep ^0.0.0.0 | awk '{print USD2}' 172.16.0.1", "openstack --os-cloud=ha-example subnet list +-------------------------------------+---+---------------+ | ID | | Subnet +-------------------------------------+---+---------------+ | 723c5a77-156d-4c3b-b53c-ee73a4f75185 | | 172.16.0.0/24 | +--------------------------------------+------------------+", "pcs resource create float-ip openstack-floating-ip cloud=\"ha-example\" ip_id=\"10.19.227.211\" subnet_id=\"723c5a77-156d-4c3b-b53c-ee73a4f75185\"", "pcs constraint order start openstack-info-clone then float-ip Adding openstack-info-clone float-ip (kind: Mandatory) (Options: first-action=start then-action=start", "pcs constraint colocation add float-ip with openstack-info-clone score=INFINITY", "pcs constraint config Location Constraints: Ordering Constraints: start openstack-info-clone then start float-ip (kind:Mandatory) Colocation Constraints: float-ip with openstack-info-clone (score:INFINITY)", "pcs status . . . 
Full List of Resources: * fenceopenstack (stonith:fence_openstack): Started node01 * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ] * float-ip (ocf::heartbeat:openstack-floating-ip): Started node02", "pcs resource describe openstack-cinder-volume", "openstack --os-cloud=ha-example volume list | ID | Name | | 23f67c9f-b530-4d44-8ce5-ad5d056ba926| testvolume-cinder-data-disk |", "openstack --os-cloud=ha-example volume show testvolume-cinder-data-disk", "pcs resource create cinder-vol openstack-cinder-volume volume_id=\"23f67c9f-b530-4d44-8ce5-ad5d056ba926\" cloud=\"ha-example\"", "pcs constraint order start openstack-info-clone then cinder-vol Adding openstack-info-clone cinder-vol (kind: Mandatory) (Options: first-action=start then-action=start", "pcs constraint colocation add cinder-vol with openstack-info-clone score=INFINITY", "pcs constraint config Location Constraints: Ordering Constraints: start openstack-info-clone then start cinder-vol (kind:Mandatory) Colocation Constraints: cinder-vol with openstack-info-clone (score:INFINITY)", "pcs status . . . Full List of Resources: * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ] * cinder-vol (ocf::heartbeat:openstack-cinder-volume): Started node03 * fenceopenstack (stonith:fence_openstack): Started node01" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/configuring-ha-cluster-resources-on-red-hat-openstack-platform_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
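A simple way to confirm that the RHOSP-specific resources relocate correctly is to drain one node and watch the cluster move them. This is a generic Pacemaker exercise rather than a step from this chapter, and it assumes a three-node cluster like the one in the examples, with node03 currently running the virtual IP resources.
# Record which node currently runs the resources
pcs status
# Put that node into standby so the cluster moves its resources elsewhere
pcs node standby node03
# ClusterIP-osp and ClusterIP should now report Started on another node
pcs status
# Return the node to service once the move has been verified
pcs node unstandby node03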
17.2.3. Using the rndc Utility
17.2.3. Using the rndc Utility The rndc utility is a command-line tool that allows you to administer the named service, both locally and from a remote machine. Its usage is as follows: 17.2.3.1. Configuring the Utility To prevent unauthorized access to the service, named must be configured to listen on the selected port (that is, 953 by default), and an identical key must be used by both the service and the rndc utility. Table 17.7. Relevant files Path Description /etc/named.conf The default configuration file for the named service. /etc/rndc.conf The default configuration file for the rndc utility. /etc/rndc.key The default key location. The rndc configuration is located in /etc/rndc.conf . If the file does not exist, the utility will use the key located in /etc/rndc.key , which was generated automatically during the installation process using the rndc-confgen -a command. The named service is configured using the controls statement in the /etc/named.conf configuration file as described in Section 17.2.1.2, "Other Statement Types" . Unless this statement is present, only the connections from the loopback address (that is, 127.0.0.1 ) will be allowed, and the key located in /etc/rndc.key will be used. For more information on this topic, see manual pages and the BIND 9 Administrator Reference Manual listed in Section 17.2.7, "Additional Resources" . Important To prevent unprivileged users from sending control commands to the service, make sure only root is allowed to read the /etc/rndc.key file: 17.2.3.2. Checking the Service Status To check the current status of the named service, use the following command: 17.2.3.3. Reloading the Configuration and Zones To reload both the configuration file and zones, type the following at a shell prompt: This will reload the zones while keeping all previously cached responses, so that you can make changes to the zone files without losing all stored name resolutions. To reload a single zone, specify its name after the reload command, for example: Finally, to reload the configuration file and newly added zones only, type: Note If you intend to manually modify a zone that uses Dynamic DNS (DDNS), make sure you run the freeze command first: Once you are finished, run the thaw command to allow the DDNS again and reload the zone: 17.2.3.4. Updating Zone Keys To update the DNSSEC keys and sign the zone, use the sign command. For example: Note that to sign a zone with the above command, the auto-dnssec option has to be set to maintain in the zone statement. For instance: 17.2.3.5. Enabling the DNSSEC Validation To enable the DNSSEC validation, type the following at a shell prompt: Similarly, to disable this option, type: See the options statement described in Section 17.2.1.1, "Common Statement Types" for information on how to configure this option in /etc/named.conf . 17.2.3.6. Enabling the Query Logging To enable (or disable in case it is currently enabled) the query logging, run the following command: To check the current setting, use the status command as described in Section 17.2.3.2, "Checking the Service Status" .
[ "rndc [ option ...] command [ command-option ]", "~]# chmod o-rwx /etc/rndc.key", "~]# rndc status version: 9.7.0-P2-RedHat-9.7.0-5.P2.el6 CPUs found: 1 worker threads: 1 number of zones: 16 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running", "~]# rndc reload server reload successful", "~]# rndc reload localhost zone reload up-to-date", "~]# rndc reconfig", "~]# rndc freeze localhost", "~]# rndc thaw localhost The zone reload and thaw was successful.", "~]# rndc sign localhost", "zone \"localhost\" IN { type master; file \"named.localhost\"; allow-update { none; }; auto-dnssec maintain; };", "~]# rndc validation on", "~]# rndc validation off", "~]# rndc querylog" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-bind-rndc
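The freeze and thaw commands are most useful when wrapped around a manual edit of a dynamic zone. The following sketch is illustrative rather than taken from the section above: the zone name example.com and the zone file path /var/named/dynamic/example.com.db are placeholders, and the named-checkzone utility shipped with BIND is used to validate the edit before dynamic updates are re-enabled.
# Suspend dynamic updates and flush journal changes to the zone file
rndc freeze example.com
# Edit the zone file and increment its serial number
vi /var/named/dynamic/example.com.db
# Validate the edited zone (zone name and file path are placeholders)
named-checkzone example.com /var/named/dynamic/example.com.db
# Re-enable dynamic updates and reload the zone
rndc thaw example.com
# Confirm the server is healthy
rndc status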
Chapter 1. The Image service (glance)
Chapter 1. The Image service (glance) The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services. 1.1. Virtual machine image formats A virtual machine (VM) image is a file that contains a virtual disk with a bootable OS installed. Red Hat OpenStack Platform (RHOSP) supports VM images in different formats. The disk format of a VM image is the format of the underlying disk image. The container format indicates if the VM image is in a file format that also contains metadata about the VM. When you add an image to the Image service (glance), you can set the disk or container format for your image to any of the values in the following tables by using the --disk-format and --container-format command options with the glance image-create , glance image-create-via-import , and glance image-update commands. If you are not sure of the container format of your VM image, you can set it to bare . Table 1.1. Disk image formats Format Description aki Indicates an Amazon kernel image that is stored in the Image service. ami Indicates an Amazon machine image that is stored in the Image service. ari Indicates an Amazon ramdisk image that is stored in the Image service. iso Sector-by-sector copy of the data on a disk, stored in a binary file. Although an ISO file is not normally considered a VM image format, these files contain bootable file systems with an installed operating system, and you use them in the same way as other VM image files. ploop A disk format supported and used by Virtuozzo to run OS containers. qcow2 Supported by QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher. raw Unstructured disk image format. vdi Supported by VirtualBox VM monitor and QEMU emulator. vhd Virtual Hard Disk. Used by VM monitors from VMware, VirtualBox, and others. vhdx Virtual Hard Disk v2. Disk image format with a larger storage capacity than VHD. vmdk Virtual Machine Disk. Disk image format that allows incremental backups of data changes from the time of the last backup. Table 1.2. Container image formats Format Description aki Indicates an Amazon kernel image that is stored in the Image service. ami Indicates an Amazon machine image that is stored in the Image service. ari Indicates an Amazon ramdisk image that is stored in the Image service. bare Indicates there is no container or metadata envelope for the image. docker Indicates a TAR archive of the file system of a Docker container that is stored in the Image service. ova Indicates an Open Virtual Appliance (OVA) TAR archive file that is stored in the Image service. This file is stored in the Open Virtualization Format (OVF) container file. ovf OVF container file format. Open standard for packaging and distributing virtual appliances or software to be run on virtual machines. 1.2. Supported Image service back ends The following Image service (glance) back-end scenarios are supported: RADOS Block Device (RBD) is the default back end when you use Ceph. RBD multi-store. Object Storage (swift). The Image service uses the Object Storage type and back end as the default. Block Storage (cinder). Each image is stored as a volume (image volume). 
By default, it is not possible for a user to create multiple instances or volumes from a volume-backed image. But you can configure both the Image service and the Block Storage back end to do this. For more information, see Enabling the creation of multiple instances or volumes from a volume-backed image . NFS Important Although NFS is a supported Image service deployment option, more robust options are available. NFS is not native to the Image service. When you mount an NFS share on the Image service, the Image service does not manage the operation. The Image service writes data to the file system but is unaware that the back end is an NFS share. In this type of deployment, the Image service cannot retry a request if the share fails. This means that when a failure occurs on the back end, the store might enter read-only mode, or it might continue to write data to the local file system, in which case you risk data loss. To recover from this situation, you must ensure that the share is mounted and in sync, and then restart the Image service. For these reasons, Red Hat does not recommend NFS as an Image service back end. However, if you do choose to use NFS as an Image service back end, some of the following best practices can help to mitigate risks: Use a reliable production-grade NFS back end. Ensure that you have a strong and reliable connection between Controller nodes and the NFS back end: Layer 2 (L2) network connectivity is recommended. Include monitoring and alerts for the mounted share. Set underlying file system permissions. Write permissions must be present in the shared file system that you use as a store. Ensure that the user and the group that the glance-api process runs on do not have write permissions on the mount point at the local file system. This means that the process can detect possible mount failure and put the store into read-only mode during a write attempt. 1.2.1. Enabling the creation of multiple instances or volumes from a volume-backed image When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user. When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. But this causes performance issues and some of these instances or volumes will not be created, because, by default, Block Storage volumes cannot be attached multiple times to the same host. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. Therefore, you can prevent these performance issues by creating a Block Storage volume type for the Image service back end that enables this multi-attach property and configuring the Image service to use this multi-attach volume type. Note By default, only the Block Storage project administrator can create volume types. Procedure Source the overcloud credentials file: Replace <credentials_file> with the name of your credentials file, for example, overcloudrc . Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows: If you do not specify a back end for this volume type, then the Block Storage scheduler service determines which back end to use when creating each image volume, therefore these volumes might be saved on different back ends. 
You can specify the name of the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. For this example, we are using iscsi as the back-end name. To configure the Image service to use this Block Storage multi-attach volume type, you must add the following parameter to the end of the [default_backend]` section of the glance-api.conf file: cinder_volume_type = glance-multiattach 1.3. Image signing and verification Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties. Note Image signing and verification is not supported if Nova is using RADOS Block Device (RBD) to store virtual machines disks. For information on image signing and verification, see Validating Image service (glance) images in the Managing secrets with the Key Manager service guide. 1.4. Image format conversion You can convert images to a different format by activating the image conversion plugin when you import images to the Image service (glance). You can activate or deactivate the image conversion plugin based on your Red Hat OpenStack Platform (RHOSP) deployment configuration. The deployer configures the preferred format of images for the deployment. Internally, the Image service receives the bits of the image in a particular format and stores the bits in a temporary location. The Image service triggers the plugin to convert the image to the target format and move the image to a final destination. When the task is finished, the Image service deletes the temporary location. The Image service does not retain the format that was uploaded initially. You can trigger image conversion only when importing an image. It does not run when uploading an image. Use the Image service command-line client for image management. For example: Replace <name> with the name of your image. 1.5. Improving scalability with Image service caching Use the Image service (glance) API caching mechanism to store copies of images on Image service API servers and retrieve them automatically to improve scalability. With Image service caching, you can run glance-api on multiple hosts. This means that it does not need to retrieve the same image from back-end storage multiple times. Image service caching does not affect any Image service operations. Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates: Procedure In an environment file, set the value of the GlanceCacheEnabled parameter to true , which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template: Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud. Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes: Adjust the frequency according to your needs to avoid file system full scenarios. Include the following elements when you choose an alternative frequency: The size of the files that you want to cache in your environment. The amount of available file system space. The frequency at which the environment caches images. 1.6. Image pre-caching You can use Red Hat OpenStack Platform (RHOSP) director to pre-cache images as part of the glance-api service. 
Use the Image service (glance) command-line client for image management. 1.6.1. Configuring the default interval for periodic image pre-caching The Image service (glance) pre-caching periodic job runs every 300 seconds (5 minutes default time) on each controller node where the glance-api service is running. To change the default time, you can set the cache_prefetcher_interval parameter under the Default section in the glance-api.conf environment file. Procedure Add a new interval with the ExtraConfig parameter in an environment file on the undercloud according to your requirements: Replace <300> with the number of seconds that you want as an interval to pre-cache images. After you adjust the interval in the environment file in /home/stack/templates/ , log in as the stack user and deploy the configuration: Replace <env_file> with the name of the environment file that contains the ExtraConfig settings that you added. Important If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud. Additional resources For more information about the openstack overcloud deploy command, see Deployment command in the Installing and managing Red Hat OpenStack Platform with director guide. 1.6.2. Preparing to use a periodic job to pre-cache an image To use a periodic job to pre-cache an image, you must use the glance-cache-manage command connected directly to the node where the glance_api service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the glance_api service is running, run commands on the first overcloud node, which is called controller-0 by default. Complete the following prerequisite procedure to ensure that you run commands from the correct host, have the necessary credentials, and are also running the glance-cache-manage commands from inside the glance-api container. Procedure Log in to the undercloud as the stack user and identify the provisioning IP address of controller-0 : To authenticate to the overcloud, copy the credentials that are stored in /home/stack/overcloudrc , by default, to controller-0 : Connect to controller-0 : On controller-0 as the tripleo-admin user, identify the IP address of the glance_api service . In the following example, the IP address is 172.25.1.105 : Because the glance-cache-manage command is only available in the glance_api container, create a script to exec into that container where the environment variables to authenticate to the overcloud are already set. Create a script called glance_pod.sh in /home/tripleo-admin on controller-0 with the following contents: Source the overcloudrc file and run the glance_pod.sh script to exec into the glance_api container with the necessary environment variables to authenticate to the overcloud Controller node. Use a command such as glance image-list to verify that the container can run authenticated commands against the overcloud. 1.6.3. Using a periodic job to pre-cache an image When you have completed the prerequisite procedure in Section 1.6.2, "Preparing to use a periodic job to pre-cache an image" , you can use a periodic job to pre-cache an image. Procedure As the admin user, queue an image to cache: Replace <host_ip> with the IP address of the Controller node where the glance-api container is running. Replace <image_id> with the ID of the image that you want to queue. 
When you have queued the images that you want to pre-cache, the cache_images periodic job prefetches all queued images concurrently. Note Because the image cache is local to each node, if your Red Hat OpenStack Platform (RHOSP) deployment is HA, with 3, 5, or 7 Controllers, then you must specify the host address with the --host option when you run the glance-cache-manage command. Run the following command to view the images in the image cache: Replace <host_ip> with the IP address of the host in your environment. 1.6.4. Image caching command options You can use the following glance-cache-manage command options to queue images for caching and manage cached images: list-cached to list all images that are currently cached. list-queued to list all images that are currently queued for caching. queue-image to queue an image for caching. delete-cached-image to purge an image from the cache. delete-all-cached-images to remove all images from the cache. delete-queued-image to delete an image from the cache queue. delete-all-queued-images to delete all images from the cache queue. 1.7. Using the Image service API to enable sparse image upload With the Image service (glance) API, you can use sparse image upload to reduce network traffic and save storage space. This feature is particularly useful in distributed compute node (DCN) environments. With a sparse image file, the Image service does not write null byte sequences. The Image service writes data with a given offset. Storage back ends interpret these offsets as null bytes that do not actually consume storage space. Use the Image service command-line client for image management. Limitations Sparse image upload is supported only with Ceph RADOS Block Device (RBD). Sparse image upload is not supported for file systems. Sparseness is not maintained during the transfer between the client and the Image service API. The image is sparsed at the Image service API level. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment uses RBD for the Image service back end. Procedure Log in to the undercloud node as the stack user. Source the stackrc credentials file: Create an environment file with the following content: Add your new environment file to the stack with your other environment files and deploy the overcloud: For more information about uploading images, see Uploading images to the Image service . Verification You can import an image and check its size to verify sparse image upload. The following procedure uses example commands. Replace the values with those from your environment where appropriate. Download the image file locally: Replace <file_location> with the location of the file. Replace <file_name> with the name of the file. For example: Check the disk size and the virtual size of the image to be uploaded: For example: Import the image: Record the image ID. It is required in a subsequent step. Verify that the image is imported and in an active state: From a Ceph Storage node, verify that the size of the image is less than the virtual size from the output of step 1: Optional: You can confirm that rbd_thin_provisioning is configured in the Image service configuration file on the Controller nodes: Use SSH to access a Controller node: Confirm that rbd_thin_provisioning equals True on that Controller node: 1.8. Secure metadef APIs In Red Hat OpenStack Platform (RHOSP), cloud administrators can define key value pairs and tag metadata with metadata definition (metadef) APIs. 
There is no limit on the number of metadef namespaces, objects, properties, resources, or tags that cloud administrators can create. Image service policies control metadef APIs. By default, only cloud administrators can create, update, or delete (CUD) metadef APIs. This limitation prevents metadef APIs from exposing information to unauthorized users and mitigates the risk of a malicious user filling the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack. However, cloud administrators can override the default policy. 1.9. Enabling metadef API access for cloud users Cloud administrators with users who depend on write access to metadata definition (metadef) APIs can make those APIs accessible to all users by overriding the default admin-only policy. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable even if only read-access is enabled for all users. Procedure As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example: Configure the policy override file to allow metadef API read-write access to all users: Note You must configure all metadef policies to use rule:metadef_default . For information about policies and policy syntax, see this Policies chapter. Include the new policy file in the deployment command with the -e option when you deploy the overcloud:
[ "source ~/<credentials_file>", "cinder type-create glance-multiattach cinder type-key glance-multiattach set multiattach=\"<is> True\"", "cinder type-key glance-multiattach set volume_backend_name=iscsi", "glance image-create-via-import --disk-format qcow2 --container-format bare --name <name> --visibility public --import-method web-download --uri http://server/image.qcow2", "parameter_defaults: GlanceCacheEnabled: true", "parameter_defaults: ControllerExtraConfig: glance::cache::pruner::minute: '*/5'", "parameter_defaults: ControllerExtraConfig: glance::config::glance_api_config: DEFAULT/cache_prefetcher_interval: value: '<300>'", "openstack overcloud deploy --templates -e /home/stack/templates/<env_file>.yaml", "(undercloud) [stack@site-undercloud-0 ~]USD openstack server list -f value -c Name -c Networks | grep controller overcloud-controller-1 ctlplane=192.168.24.40 overcloud-controller-2 ctlplane=192.168.24.13 overcloud-controller-0 ctlplane=192.168.24.71 (undercloud) [stack@site-undercloud-0 ~]USD", "scp ~/overcloudrc [email protected]:/home/tripleo-admin/", "ssh [email protected]", "(overcloud) [root@controller-0 ~]# grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg listen glance_api server central-controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2", "sudo podman exec -ti -e NOVA_VERSION=USDNOVA_VERSION -e COMPUTE_API_VERSION=USDCOMPUTE_API_VERSION -e OS_USERNAME=USDOS_USERNAME -e OS_PROJECT_NAME=USDOS_PROJECT_NAME -e OS_USER_DOMAIN_NAME=USDOS_USER_DOMAIN_NAME -e OS_PROJECT_DOMAIN_NAME=USDOS_PROJECT_DOMAIN_NAME -e OS_NO_CACHE=USDOS_NO_CACHE -e OS_CLOUDNAME=USDOS_CLOUDNAME -e no_proxy=USDno_proxy -e OS_AUTH_TYPE=USDOS_AUTH_TYPE -e OS_PASSWORD=USDOS_PASSWORD -e OS_AUTH_URL=USDOS_AUTH_URL -e OS_IDENTITY_API_VERSION=USDOS_IDENTITY_API_VERSION -e OS_COMPUTE_API_VERSION=USDOS_COMPUTE_API_VERSION -e OS_IMAGE_API_VERSION=USDOS_IMAGE_API_VERSION -e OS_VOLUME_API_VERSION=USDOS_VOLUME_API_VERSION -e OS_REGION_NAME=USDOS_REGION_NAME glance_api /bin/bash", "[tripleo-admin@controller-0 ~]USD source overcloudrc (overcloudrc) [tripleo-admin@central-controller-0 ~]USD bash glance_pod.sh ()[glance@controller-0 /]USD", "()[glance@controller-0 /]USD glance image-list +--------------------------------------+----------------------------------+ | ID | Name | +--------------------------------------+----------------------------------+ | ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img | +--------------------------------------+----------------------------------+ ()[glance@controller-0 /]USD", "glance-cache-manage --host=<host_ip> queue-image <image_id>", "glance-cache-manage --host=<host_ip> list-cached", "source stackrc", "parameter_defaults: GlanceSparseUploadEnabled: true", "openstack overcloud deploy --templates ... 
-e <existing_overcloud_environment_files> -e <new_environment_file>.yaml", "wget <file_location>/<file_name>", "wget https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-1508.qcow2", "qemu-img info <file_name>", "qemu-img info CentOS-6-x86_64-GenericCloud-1508.qcow2 image: CentOS-6-x86_64-GenericCloud-1508.qcow2 file format: qcow2 virtual size: 8 GiB (8589934592 bytes) disk size: 1.09 GiB cluster_size: 65536 Format specific information: compat: 0.10 refcount bits: 1", "glance image-create-via-import --disk-format qcow2 --container-format bare --name centos_1 --file <file_name>", "glance image show <image_id>", "sudo rbd -p images diff <image_id> | awk '{ SUM += USD2 } END { print SUM/1024/1024/1024 \" GB\" }' 1.03906 GB", "ssh -A -t tripleo-admin@<controller_node_IP_address>", "sudo podman exec -it glance_api sh -c 'grep ^rbd_thin_provisioning /etc/glance/glance-api.conf'", "cat open-up-glance-api-metadef.yaml", "GlanceApiPolicies: { glance-metadef_default: { key: 'metadef_default', value: '' }, glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' }, glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' }, glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' }, glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' }, glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' }, glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' }, glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' }, glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' }, glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' }, glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' }, glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' }, glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' }, glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' }, glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' }, glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' }, glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' }, glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' }, glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' } }", "openstack 
overcloud deploy -e open-up-glance-api-metadef.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/assembly_image-service_osp
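Once you are working inside the glance_api container as described in the pre-caching sections of this chapter, the glance-cache-manage options can be combined into a short routine for queueing images and auditing the cache. The following sketch reuses the example host address 172.25.1.105 and the example image ID from this chapter; substitute the values from your own deployment and repeat the queue-image call for every image you want pre-cached.
# Queue an image for the periodic pre-caching job
glance-cache-manage --host=172.25.1.105 queue-image ad2f8daf-56f3-4e10-b5dc-d28d3a81f659
# Review what is queued and what has already been cached
glance-cache-manage --host=172.25.1.105 list-queued
glance-cache-manage --host=172.25.1.105 list-cached
# Remove a single image from the cache when it is no longer needed
glance-cache-manage --host=172.25.1.105 delete-cached-image ad2f8daf-56f3-4e10-b5dc-d28d3a81f659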
Chapter 137. Hazelcast Queue Component
Chapter 137. Hazelcast Queue Component Available as of Camel version 2.7 The Hazelcast Queue component is one of Camel Hazelcast Components which allows you to access Hazelcast distributed queue. 137.1. Options The Hazelcast Queue component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast Queue endpoint is configured using URI syntax: with the following path and query parameters: 137.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 137.1.2. Query Parameters (16 parameters): Name Description Default Type defaultOperation (common) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (common) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (common) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollingTimeout (consumer) Define the polling timeout of the Queue consumer in Poll mode 10000 long poolSize (consumer) Define the Pool size for Queue Consumer Executor 1 int queueConsumerMode (consumer) Define the Queue Consumer mode: Listen or Poll Listen HazelcastQueueConsumer Mode exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 
1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transfered. If header or body contains not serializable objects, they will be skipped. false boolean 137.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.hazelcast-queue.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-queue.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.hazelcast-queue.enabled Enable hazelcast-queue component true Boolean camel.component.hazelcast-queue.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-queue.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-queue.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 137.3. Queue producer - to("hazelcast-queue:foo") The queue producer provides 10 operations: * add * put * poll * peek * offer * remove value * remaining capacity * remove all * remove if * drain to * take * retain all 137.3.1. Sample for add : from("direct:add") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD)) .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX); 137.3.2. Sample for put : from("direct:put") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT)) .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX); 137.3.3. Sample for poll : from("direct:poll") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.POLL)) .toF("hazelcast:%sbar", HazelcastConstants.QUEUE_PREFIX); 137.3.4. Sample for peek : from("direct:peek") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PEEK)) .toF("hazelcast:%sbar", HazelcastConstants.QUEUE_PREFIX); 137.3.5. Sample for offer : from("direct:offer") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.OFFER)) .toF("hazelcast:%sbar", HazelcastConstants.QUEUE_PREFIX); 137.3.6. Sample for removevalue : from("direct:removevalue") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE)) .toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX); 137.3.7. Sample for remaining capacity : from("direct:remaining-capacity").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMAINING_CAPACITY)).to( String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX)); 137.3.8. Sample for remove all : from("direct:removeAll").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_ALL)).to( String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX)); 137.3.9. 
Sample for remove if : from("direct:removeIf").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_IF)).to( String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX)); 137.3.10. Sample for drain to : from("direct:drainTo").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DRAIN_TO)).to( String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX)); 137.3.11. Sample for take : from("direct:take").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.TAKE)).to( String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX)); 137.3.12. Sample for retain all : from("direct:retainAll").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.RETAIN_ALL)).to( String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX)); 137.4. Queue consumer - from("hazelcast-queue:foo") The queue consumer provides two different modes: Poll Listen Sample for Poll mode fromF("hazelcast-%sfoo?queueConsumerMode=Poll", HazelcastConstants.QUEUE_PREFIX)).to("mock:result"); In Poll mode, the consumer polls the queue and returns the head of the queue, or null after a timeout. In Listen mode, the consumer instead listens for events on the queue. The queue consumer in Listen mode provides 2 operations: * add * remove Sample for Listen mode fromF("hazelcast-%smm", HazelcastConstants.QUEUE_PREFIX) .log("object...") .choice() .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED)) .log("...added") .to("mock:added") .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED)) .log("...removed") .to("mock:removed") .otherwise() .log("fail!");
[ "hazelcast-queue:cacheName", "from(\"direct:add\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD)) .toF(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX);", "from(\"direct:put\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT)) .toF(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX);", "from(\"direct:poll\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.POLL)) .toF(\"hazelcast:%sbar\", HazelcastConstants.QUEUE_PREFIX);", "from(\"direct:peek\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PEEK)) .toF(\"hazelcast:%sbar\", HazelcastConstants.QUEUE_PREFIX);", "from(\"direct:offer\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.OFFER)) .toF(\"hazelcast:%sbar\", HazelcastConstants.QUEUE_PREFIX);", "from(\"direct:removevalue\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE)) .toF(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX);", "from(\"direct:remaining-capacity\").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMAINING_CAPACITY)).to( String.format(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX));", "from(\"direct:removeAll\").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_ALL)).to( String.format(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX));", "from(\"direct:removeIf\").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_IF)).to( String.format(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX));", "from(\"direct:drainTo\").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DRAIN_TO)).to( String.format(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX));", "from(\"direct:take\").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.TAKE)).to( String.format(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX));", "from(\"direct:retainAll\").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.RETAIN_ALL)).to( String.format(\"hazelcast-%sbar\", HazelcastConstants.QUEUE_PREFIX));", "fromF(\"hazelcast-%sfoo?queueConsumerMode=Poll\", HazelcastConstants.QUEUE_PREFIX)).to(\"mock:result\");", "fromF(\"hazelcast-%smm\", HazelcastConstants.QUEUE_PREFIX) .log(\"object...\") .choice() .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED)) .log(\"...added\") .to(\"mock:added\") .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED)) .log(\"...removed\") .to(\"mock:removed\") .otherwise() .log(\"fail!\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-queue-component
High Availability Add-On Administration
High Availability Add-On Administration Red Hat Enterprise Linux 7 Configuring Red Hat High Availability deployments Steven Levine Red Hat Customer Content Services [email protected]
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/index
Chapter 6. Reference
Chapter 6. Reference 6.1. Artifact Repository Mirrors A repository in Maven holds build artifacts and dependencies of various types (all the project jars, library jar, plugins or any other project specific artifacts). It also specifies locations from where to download artifacts from, while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom repository (mirror). Benefits of using a mirror are: Availability of a synchronized mirror, which is geographically closer and faster. Ability to have greater control over the repository content. Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories. Improved build times. Often, a repository manager can serve as local cache to a mirror. Assuming that the repository manager is already deployed and reachable externally at http://10.0.0.1:8080/repository/internal/ , the S2I build can then use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application as follows: Identify the name of the build configuration to apply MAVEN_MIRROR_URL variable against: USD oc get bc -o name buildconfig/sso Update build configuration of sso with a MAVEN_MIRROR_URL environment variable USD oc set env bc/sso \ -e MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/" buildconfig "sso" updated Verify the setting USD oc set env bc/sso --list # buildconfigs sso MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/ Schedule new build of the application Note During application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build. 6.2. Environment Variables 6.2.1. Information Environment Variables The following information environment variables are designed to convey information about the image and should not be modified by the user: Table 6.1. Information Environment Variables Variable Name Description Example Value AB_JOLOKIA_AUTH_OPENSHIFT - true AB_JOLOKIA_HTTPS - true AB_JOLOKIA_PASSWORD_RANDOM - true JBOSS_IMAGE_NAME Image name, same as "name" label. rh-sso-7/sso74-openj9-openshift-rhel8 JBOSS_IMAGE_VERSION Image version, same as "version" label. 7.4 JBOSS_MODULES_SYSTEM_PKGS - org.jboss.logmanager,jdk.nashorn.api 6.2.2. Configuration Environment Variables Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired. Table 6.2. Configuration Environment Variables Variable Name Description Example Value AB_JOLOKIA_AUTH_OPENSHIFT Switch on client authentication for OpenShift TLS communication. The value of this parameter can be a relative distinguished name which must be contained in a presented client's certificate. Enabling this parameter will automatically switch Jolokia into https communication mode. The default CA cert is set to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . true AB_JOLOKIA_CONFIG If set uses this file (including path) as Jolokia JVM agent properties (as described in Jolokia's reference manual ). If not set, the /opt/jolokia/etc/jolokia.properties file will be created using the settings as defined in this document, otherwise the rest of the settings in this document are ignored. 
/opt/jolokia/custom.properties AB_JOLOKIA_DISCOVERY_ENABLED Enable Jolokia discovery. Defaults to false . true AB_JOLOKIA_HOST Host address to bind to. Defaults to 0.0.0.0 . 127.0.0.1 AB_JOLOKIA_HTTPS Switch on secure communication with https. By default self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS . NOTE: If the values is set to an empty string, https is turned off . If the value is set to a non empty string, https is turned on . true AB_JOLOKIA_ID Agent ID to use (USDHOSTNAME by default, which is the container id). openjdk-app-1-xqlsj AB_JOLOKIA_OFF If set disables activation of Jolokia (i.e. echos an empty value). By default, Jolokia is enabled. NOTE: If the values is set to an empty string, https is turned off . If the value is set to a non empty string, https is turned on . true AB_JOLOKIA_OPTS Additional options to be appended to the agent configuration. They should be given in the format "key=value, key=value, ...<200b> " backlog=20 AB_JOLOKIA_PASSWORD Password for basic authentication. By default authentication is switched off. mypassword AB_JOLOKIA_PASSWORD_RANDOM If set, a random value is generated for AB_JOLOKIA_PASSWORD , and it is saved in the /opt/jolokia/etc/jolokia.pw file. true AB_JOLOKIA_PORT Port to use (Default: 8778 ). 5432 AB_JOLOKIA_USER User for basic authentication. Defaults to jolokia . myusername CONTAINER_CORE_LIMIT A calculated core limit as described in CFS Bandwidth Control. 2 GC_ADAPTIVE_SIZE_POLICY_WEIGHT The weighting given to the current Garbage Collection (GC) time versus GC times. 90 GC_MAX_HEAP_FREE_RATIO Maximum percentage of heap free after GC to avoid shrinking. 40 GC_MAX_METASPACE_SIZE The maximum metaspace size. 100 GC_TIME_RATIO_MIN_HEAP_FREE_RATIO Minimum percentage of heap free after GC to avoid expansion. 20 GC_TIME_RATIO Specifies the ratio of the time spent outside the garbage collection (for example, the time spent for application execution) to the time spent in the garbage collection. 4 JAVA_DIAGNOSTICS Set this to get some diagnostics information to standard out when things are happening. true JAVA_INITIAL_MEM_RATIO This is used to calculate a default initial heap memory based the maximal heap memory. The default is 100 which means 100% of the maximal heap is used for the initial heap size. You can skip this mechanism by setting this value to 0 in which case no -Xms option is added. 100 JAVA_MAX_MEM_RATIO It is used to calculate a default maximal heap memory based on a containers restriction. If used in a Docker container without any memory constraints for the container then this option has no effect. If there is a memory constraint then -Xmx is set to a ratio of the container available memory as set here. The default is 50 which means 50% of the available memory is used as an upper boundary. You can skip this mechanism by setting this value to 0 in which case no -Xmx option is added. 40 JAVA_OPTS_APPEND Server startup options. -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION For backwards compatability, set to true to use MyQueue and MyTopic as physical destination name defaults instead of queue/MyQueue and topic/MyTopic . false OPENSHIFT_KUBE_PING_LABELS Clustering labels selector. app=sso-app OPENSHIFT_KUBE_PING_NAMESPACE Clustering project namespace. 
myproject SCRIPT_DEBUG If set to true , ensurses that the bash scripts are executed with the -x option, printing the commands and their arguments as they are executed. true SSO_ADMIN_PASSWORD Password of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift Instructional message when the template is instantiated. adm-password SSO_ADMIN_USERNAME Username of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift Instructional message when the template is instantiated. admin SSO_HOSTNAME Custom hostname for the Red Hat Single Sign-On server. Not set by default . If not set, the request hostname SPI provider, which uses the request headers to determine the hostname of the Red Hat Single Sign-On server is used. If set, the fixed hostname SPI provider, with the hostname of the Red Hat Single Sign-On server set to the provided variable value, is used. See dedicated Customizing Hostname for the Red Hat Single Sign-On Server section for additional steps to be performed, when SSO_HOSTNAME variable is set. rh-sso-server.openshift.example.com SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. demo SSO_SERVICE_PASSWORD The password for the Red Hat Single Sign-On service user. mgmt-password SSO_SERVICE_USERNAME The username used to access the Red Hat Single Sign-On service. This is used by clients to create the application client(s) within the specified Red Hat Single Sign-On realm. This user is created if this environment variable is provided. sso-mgmtuser SSO_TRUSTSTORE The name of the truststore file within the secret. truststore.jks SSO_TRUSTSTORE_DIR Truststore directory. /etc/sso-secret-volume SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. mykeystorepass SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. truststore-secret Available application templates for Red Hat Single Sign-On for OpenShift can combine the aforementioned configuration variables with common OpenShift variables (for example APPLICATION_NAME or SOURCE_REPOSITORY_URL ), product specific variables (e.g. HORNETQ_CLUSTER_PASSWORD ), or configuration variables typical to database images (e.g. POSTGRESQL_MAX_CONNECTIONS ) yet. All of these different types of configuration variables can be adjusted as desired to achieve the deployed Red Hat Single Sign-On-enabled application will align with the intended use case as much as possible. The list of configuration variables, available for each category of application templates for Red Hat Single Sign-On-enabled applications, is described below. 6.2.3. Template variables for all Red Hat Single Sign-On images Table 6.3. Configuration Variables Available For All Red Hat Single Sign-On Images Variable Description APPLICATION_NAME The name for the application. DB_MAX_POOL_SIZE Sets xa-pool/max-pool-size for the configured datasource. DB_TX_ISOLATION Sets transaction-isolation for the configured datasource. DB_USERNAME Database user name. HOSTNAME_HTTP Custom hostname for http service route. Leave blank for default hostname, e.g.: <application-name>.<project>.<default-domain-suffix> . HOSTNAME_HTTPS Custom hostname for https service route. 
Leave blank for default hostname, e.g.: <application-name>.<project>.<default-domain-suffix> . HTTPS_KEYSTORE The name of the keystore file within the secret. If defined along with HTTPS_PASSWORD and HTTPS_NAME , enable HTTPS and set the SSL certificate key file to a relative path under USDJBOSS_HOME/standalone/configuration . HTTPS_KEYSTORE_TYPE The type of the keystore file (JKS or JCEKS). HTTPS_NAME The name associated with the server certificate (e.g. jboss ). If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enable HTTPS and set the SSL name. HTTPS_PASSWORD The password for the keystore and certificate (e.g. mykeystorepass ). If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enable HTTPS and set the SSL key password. HTTPS_SECRET The name of the secret containing the keystore file. IMAGE_STREAM_NAMESPACE Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you've installed the ImageStreams in a different namespace/project. JGROUPS_CLUSTER_PASSWORD JGroups cluster password. JGROUPS_ENCRYPT_KEYSTORE The name of the keystore file within the secret. JGROUPS_ENCRYPT_NAME The name associated with the server certificate (e.g. secret-key ). JGROUPS_ENCRYPT_PASSWORD The password for the keystore and certificate (e.g. password ). JGROUPS_ENCRYPT_SECRET The name of the secret containing the keystore file. SSO_ADMIN_USERNAME Username of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift instructional message when the template is instantiated. SSO_ADMIN_PASSWORD Password of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift instructional message when the template is instantiated. SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. SSO_SERVICE_USERNAME The username used to access the Red Hat Single Sign-On service. This is used by clients to create the application client(s) within the specified Red Hat Single Sign-On realm. This user is created if this environment variable is provided. SSO_SERVICE_PASSWORD The password for the Red Hat Single Sign-On service user. SSO_TRUSTSTORE The name of the truststore file within the secret. SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. 6.2.4. Template variables specific to sso74-openj9-postgresql , sso74-openj9-postgresql-persistent , and sso74-openj9-x509-postgresql-persistent Table 6.4. Configuration Variables Specific To Red Hat Single Sign-On-enabled PostgreSQL Applications With Ephemeral Or Persistent Storage Variable Description DB_USERNAME Database user name. DB_PASSWORD Database user password. DB_JNDI Database JNDI name used by application to resolve the datasource, e.g. java:/jboss/datasources/postgresql POSTGRESQL_MAX_CONNECTIONS The maximum number of client connections allowed. This also sets the maximum number of prepared transactions. POSTGRESQL_SHARED_BUFFERS Configures how much memory is dedicated to PostgreSQL for caching data. 6.2.5. Template variables for general eap64 and eap71 S2I images Table 6.5. 
Configuration Variables For EAP 6.4 and EAP 7 Applications Built Via S2I Variable Description APPLICATION_NAME The name for the application. ARTIFACT_DIR Artifacts directory. AUTO_DEPLOY_EXPLODED Controls whether exploded deployment content should be automatically deployed. CONTEXT_DIR Path within Git project to build; empty for root project directory. GENERIC_WEBHOOK_SECRET Generic build trigger secret. GITHUB_WEBHOOK_SECRET GitHub trigger secret. HORNETQ_CLUSTER_PASSWORD HornetQ cluster administrator password. HORNETQ_QUEUES Queue names. HORNETQ_TOPICS Topic names. HOSTNAME_HTTP Custom host name for http service route. Leave blank for default host name, e.g.: <application-name>.<project>.<default-domain-suffix> . HOSTNAME_HTTPS Custom host name for https service route. Leave blank for default host name, e.g.: <application-name>.<project>.<default-domain-suffix> . HTTPS_KEYSTORE_TYPE The type of the keystore file (JKS or JCEKS). HTTPS_KEYSTORE The name of the keystore file within the secret. If defined along with HTTPS_PASSWORD and HTTPS_NAME , enable HTTPS and set the SSL certificate key file to a relative path under USDJBOSS_HOME/standalone/configuration . HTTPS_NAME The name associated with the server certificate (e.g. jboss ). If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enable HTTPS and set the SSL name. HTTPS_PASSWORD The password for the keystore and certificate (e.g. mykeystorepass ). If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enable HTTPS and set the SSL key password. HTTPS_SECRET The name of the secret containing the keystore file. IMAGE_STREAM_NAMESPACE Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you've installed the ImageStreams in a different namespace/project. JGROUPS_CLUSTER_PASSWORD JGroups cluster password. JGROUPS_ENCRYPT_KEYSTORE The name of the keystore file within the secret. JGROUPS_ENCRYPT_NAME The name associated with the server certificate (e.g. secret-key ). JGROUPS_ENCRYPT_PASSWORD The password for the keystore and certificate (e.g. password ). JGROUPS_ENCRYPT_SECRET The name of the secret containing the keystore file. SOURCE_REPOSITORY_REF Git branch/tag reference. SOURCE_REPOSITORY_URL Git source URI for application. 6.2.6. Template variables specific to eap64-sso-s2i and eap71-sso-s2i for automatic client registration Table 6.6. Configuration Variables For EAP 6.4 and EAP 7 Red Hat Single Sign-On-enabled Applications Built Via S2I Variable Description SSO_URL Red Hat Single Sign-On server location. SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. SSO_USERNAME The username used to access the Red Hat Single Sign-On service. This is used to create the application client(s) within the specified Red Hat Single Sign-On realm. This should match the SSO_SERVICE_USERNAME specified through one of the sso74-openj9- templates. SSO_PASSWORD The password for the Red Hat Single Sign-On service user. SSO_PUBLIC_KEY Red Hat Single Sign-On public key. Public key is recommended to be passed into the template to avoid man-in-the-middle security attacks. SSO_SECRET The Red Hat Single Sign-On client secret for confidential access. SSO_SERVICE_URL Red Hat Single Sign-On service location. SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. 
SSO_TRUSTSTORE The name of the truststore file within the secret. SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. SSO_BEARER_ONLY Red Hat Single Sign-On client access type. SSO_DISABLE_SSL_CERTIFICATE_VALIDATION If true SSL communication between EAP and the Red Hat Single Sign-On Server is insecure (i.e. certificate validation is disabled with curl) SSO_ENABLE_CORS Enable CORS for Red Hat Single Sign-On applications. 6.2.7. Template variables specific to eap64-sso-s2i and eap71-sso-s2i for automatic client registration with SAML clients Table 6.7. Configuration Variables For EAP 6.4 and EAP 7 Red Hat Single Sign-On-enabled Applications Built Via S2I Using SAML Protocol Variable Description SSO_SAML_CERTIFICATE_NAME The name associated with the server certificate. SSO_SAML_KEYSTORE_PASSWORD The password for the keystore and certificate. SSO_SAML_KEYSTORE The name of the keystore file within the secret. SSO_SAML_KEYSTORE_SECRET The name of the secret containing the keystore file. SSO_SAML_LOGOUT_PAGE Red Hat Single Sign-On logout page for SAML applications. 6.3. Exposed Ports Port Number Description 8443 HTTPS 8778 Jolokia monitoring
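In practice, the variables in the preceding tables are supplied either as template parameters when a template is instantiated or as environment variables on the resulting deployment. The following is a minimal, hedged sketch rather than a prescribed procedure: it assumes that one of the templates named in this chapter ( sso74-openj9-postgresql-persistent ) has already been imported into the openshift namespace, that the application name is sso , and that every parameter value shown is purely illustrative.

# Instantiate the template, overriding a few of the parameters documented above.
USD oc new-app --template=sso74-openj9-postgresql-persistent -p APPLICATION_NAME=sso -p SSO_REALM=demo -p SSO_SERVICE_USERNAME=sso-mgmtuser

# Image-level variables from the first table can then be adjusted on the resulting DeploymentConfig.
USD oc set env dc/sso AB_JOLOKIA_PASSWORD_RANDOM=true JAVA_MAX_MEM_RATIO=50 SCRIPT_DEBUG=true

Changing environment variables with oc set env typically triggers a new rollout of the DeploymentConfig, so the adjusted values take effect with the next deployment.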
[ "oc get bc -o name buildconfig/sso", "oc set env bc/sso -e MAVEN_MIRROR_URL=\"http://10.0.0.1:8080/repository/internal/\" buildconfig \"sso\" updated", "oc set env bc/sso --list buildconfigs sso MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_eclipse_openj9/reference
Chapter 6. Channels
Chapter 6. Channels 6.1. Channels and subscriptions Channels are custom resources that define a single event-forwarding and persistence layer. After events have been sent to a channel from an event source or producer, these events can be sent to multiple Knative services or other sinks by using a subscription. You can create channels by instantiating a supported Channel object, and configure re-delivery attempts by modifying the delivery spec in a Subscription object. After you create a Channel object, a mutating admission webhook adds a set of spec.channelTemplate properties for the Channel object based on the default channel implementation. For example, for an InMemoryChannel default implementation, the Channel object looks as follows: apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default spec: channelTemplate: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel The channel controller then creates the backing channel instance based on the spec.channelTemplate configuration. Note The spec.channelTemplate properties cannot be changed after creation, because they are set by the default channel mechanism rather than by the user. When this mechanism is used with the preceding example, two objects are created: a generic backing channel and an InMemoryChannel channel. If you are using a different default channel implementation, the InMemoryChannel is replaced with one that is specific to your implementation. For example, with the Knative broker for Apache Kafka, the KafkaChannel channel is created. The backing channel acts as a proxy that copies its subscriptions to the user-created channel object, and sets the user-created channel object status to reflect the status of the backing channel. 6.1.1. Channel implementation types OpenShift Serverless supports the InMemoryChannel and KafkaChannel channels implementations. The InMemoryChannel channel is recommended for development use only due to its limitations. You can use the KafkaChannel channel for a production environment. The following are limitations of InMemoryChannel type channels: No event persistence is available. If a pod goes down, events on that pod are lost. InMemoryChannel channels do not implement event ordering, so two events that are received in the channel at the same time can be delivered to a subscriber in any order. If a subscriber rejects an event, there are no re-delivery attempts by default. You can configure re-delivery attempts by modifying the delivery spec in the Subscription object. 6.2. Creating channels Channels are custom resources that define a single event-forwarding and persistence layer. After events have been sent to a channel from an event source or producer, these events can be sent to multiple Knative services or other sinks by using a subscription. You can create channels by instantiating a supported Channel object, and configure re-delivery attempts by modifying the delivery spec in a Subscription object. 6.2.1. Creating a channel by using the Administrator perspective After Knative Eventing is installed on your cluster, you can create a channel by using the Administrator perspective. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. 
You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Channel . You will be directed to the Channel page. Select the type of Channel object that you want to create in the Type list. Note Currently only InMemoryChannel channel objects are supported by default. Knative channels for Apache Kafka are available if you have installed the Knative broker implementation for Apache Kafka on OpenShift Serverless. Click Create . 6.2.2. Creating a channel by using the Developer perspective Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a channel. After Knative Eventing is installed on your cluster, you can create a channel by using the web console. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to +Add Channel . Select the type of Channel object that you want to create in the Type list. Click Create . Verification Confirm that the channel now exists by navigating to the Topology page. 6.2.3. Creating a channel by using the Knative CLI Using the Knative ( kn ) CLI to create channels provides a more streamlined and intuitive user interface than modifying YAML files directly. You can use the kn channel create command to create a channel. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a channel: USD kn channel create <channel_name> --type <channel_type> The channel type is optional, but where specified, must be given in the format Group:Version:Kind . For example, you can create an InMemoryChannel object: USD kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel Example output Channel 'mychannel' created in namespace 'default'. Verification To confirm that the channel now exists, list the existing channels and inspect the output: USD kn channel list Example output kn channel list NAME TYPE URL AGE READY REASON mychannel InMemoryChannel http://mychannel-kn-channel.default.svc.cluster.local 93s True Deleting a channel Delete a channel: USD kn channel delete <channel_name> 6.2.4. Creating a default implementation channel by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe channels declaratively and in a reproducible manner. To create a serverless channel by using YAML, you must create a YAML file that defines a Channel object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Channel object as a YAML file: apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default Apply the YAML file: USD oc apply -f <filename> 6.2.5. Creating a channel for Apache Kafka by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe channels declaratively and in a reproducible manner. You can create a Knative Eventing channel that is backed by Kafka topics by creating a Kafka channel. To create a Kafka channel by using YAML, you must create a YAML file that defines a KafkaChannel object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a KafkaChannel object as a YAML file: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1 Important Only the v1beta1 version of the API for KafkaChannel objects on OpenShift Serverless is supported. Do not use the v1alpha1 version of this API, as this version is now deprecated. Apply the KafkaChannel YAML file: USD oc apply -f <filename> 6.2.6. steps After you have created a channel, you can connect the channel to a sink so that the sink can receive events. Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 6.3. Connecting channels to sinks Events that have been sent to a channel from an event source or producer can be forwarded to one or more sinks by using subscriptions . You can create subscriptions by configuring a Subscription object, which specifies the channel and the sink (also known as a subscriber ) that consumes the events sent to that channel. 6.3.1. Creating a subscription by using the Developer perspective After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a subscription. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created an event sink, such as a Knative service, and a channel. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to the Topology page. Create a subscription using one of the following methods: Hover over the channel that you want to create a subscription for, and drag the arrow. The Add Subscription option is displayed. Select your sink in the Subscriber list. Click Add . 
If the service is available in the Topology view under the same namespace or project as the channel, click on the channel that you want to create a subscription for, and drag the arrow directly to a service to immediately create a subscription from the channel to that service. Verification After the subscription has been created, you can see it represented as a line that connects the channel to the service in the Topology view: 6.3.2. Creating a subscription by using YAML After you have created a channel and an event sink, you can create a subscription to enable event delivery. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe subscriptions declaratively and in a reproducible manner. To create a subscription by using YAML, you must create a YAML file that defines a Subscription object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Subscription object: Create a YAML file and copy the following sample code into it: apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 Name of the subscription. 2 Configuration settings for the channel that the subscription connects to. 3 Configuration settings for event delivery. This tells the subscription what happens to events that cannot be delivered to the subscriber. When this is configured, events that failed to be consumed are sent to the deadLetterSink . The event is dropped, no re-delivery of the event is attempted, and an error is logged in the system. The deadLetterSink value must be a Destination . 4 Configuration settings for the subscriber. This is the event sink that events are delivered to from the channel. Apply the YAML file: USD oc apply -f <filename> 6.3.3. Creating a subscription by using the Knative CLI After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the Knative ( kn ) CLI to create subscriptions provides a more streamlined and intuitive user interface than modifying YAML files directly. You can use the kn subscription create command with the appropriate flags to create a subscription. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a subscription to connect a sink to a channel: USD kn subscription create <subscription_name> \ --channel <group:version:kind>:<channel_name> \ 1 --sink <sink_prefix>:<sink_name> \ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3 1 --channel specifies the source for cloud events that should be processed. You must provide the channel name. 
If you are not using the default InMemoryChannel channel that is backed by the Channel custom resource, you must prefix the channel name with the <group:version:kind> for the specified channel type. For example, this will be messaging.knative.dev:v1beta1:KafkaChannel for an Apache Kafka backed channel. 2 --sink specifies the target destination to which the event should be delivered. By default, the <sink_name> is interpreted as a Knative service of this name, in the same namespace as the subscription. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker. 3 Optional: --sink-dead-letter is an optional flag that can be used to specify a sink which events should be sent to in cases where events fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription create mysubscription --channel mychannel --sink ksvc:event-display Example output Subscription 'mysubscription' created in namespace 'default'. Verification To confirm that the channel is connected to the event sink, or subscriber , by a subscription, list the existing subscriptions and inspect the output: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True Deleting a subscription Delete a subscription: USD kn subscription delete <subscription_name> 6.3.4. Creating a subscription by using the Administrator perspective After you have created a channel and an event sink, also known as a subscriber , you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the subscriber to deliver events to. You can also specify some subscriber-specific options, such as how to handle failures. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a Knative channel. You have created a Knative service to use as a subscriber. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Channel tab, select the Options menu for the channel that you want to add a subscription to. Click Add Subscription in the list. In the Add Subscription dialogue box, select a Subscriber for the subscription. The subscriber is the Knative service that receives events from the channel. Click Add . 6.3.5. steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 6.4. Default channel implementation You can use the default-ch-webhook config map to specify the default channel implementation of Knative Eventing. You can specify the default channel implementation for the entire cluster or for one or more namespaces. Currently the InMemoryChannel and KafkaChannel channel types are supported. 6.4.1. Configuring the default channel implementation Prerequisites You have administrator permissions on OpenShift Container Platform. 
You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. If you want to use Knative channels for Apache Kafka as the default channel implementation, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource to add configuration details for the default-ch-webhook config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 default-ch-webhook: 2 default-ch-config: | clusterDefault: 3 apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel spec: delivery: backoffDelay: PT0.5S backoffPolicy: exponential retry: 5 namespaceDefaults: 4 my-namespace: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel spec: numPartitions: 1 replicationFactor: 1 1 In spec.config , you can specify the config maps that you want to add modified configurations for. 2 The default-ch-webhook config map can be used to specify the default channel implementation for the cluster or for one or more namespaces. 3 The cluster-wide default channel type configuration. In this example, the default channel implementation for the cluster is InMemoryChannel . 4 The namespace-scoped default channel type configuration. In this example, the default channel implementation for the my-namespace namespace is KafkaChannel . Important Configuring a namespace-specific default overrides any cluster-wide settings. 6.5. Security configuration for channels 6.5.1. Configuring TLS authentication for Knative channels for Apache Kafka Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for the Knative broker implementation for Apache Kafka. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a Kafka cluster CA certificate stored as a .pem file. You have a Kafka cluster client certificate and a key stored as .pem files. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as secrets in your chosen namespace: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-file=ca.crt=caroot.pem \ --from-file=user.crt=certificate.pem \ --from-file=user.key=key.pem Important Use the key names ca.crt , user.crt , and user.key . Do not change them. Start editing the KnativeKafka custom resource: USD oc edit knativekafka Reference your secret and the namespace of the secret: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true Note Make sure to specify the matching port in the bootstrap server. 
For example: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: tls-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094 enabled: true source: enabled: true 6.5.2. Configuring SASL authentication for Knative channels for Apache Kafka Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as secrets in your chosen namespace: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Use the key names ca.crt , password , and sasl.mechanism . Do not change them. If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-literal=tls.enabled=true \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Start editing the KnativeKafka custom resource: USD oc edit knativekafka Reference your secret and the namespace of the secret: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true Note Make sure to specify the matching port in the bootstrap server. For example: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: scram-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093 enabled: true source: enabled: true
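Putting the pieces of this chapter together, the following hedged sketch creates a channel backed by Apache Kafka and wires it to a sink by using only the kn flags documented above. The event-display and error-handler Knative services are assumed to already exist, as in the earlier Subscription example, and the channel and subscription names are illustrative.

# Create a Kafka-backed channel (requires the KnativeKafka configuration shown above).
USD kn channel create my-kafka-channel --type messaging.knative.dev:v1beta1:KafkaChannel

# Subscribe the event-display service, with error-handler as the dead letter sink.
USD kn subscription create my-kafka-subscription --channel messaging.knative.dev:v1beta1:KafkaChannel:my-kafka-channel --sink ksvc:event-display --sink-dead-letter ksvc:error-handler

# Confirm that both resources report Ready.
USD kn channel list
USD kn subscription list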
[ "apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default spec: channelTemplate: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel", "kn channel create <channel_name> --type <channel_type>", "kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel", "Channel 'mychannel' created in namespace 'default'.", "kn channel list", "kn channel list NAME TYPE URL AGE READY REASON mychannel InMemoryChannel http://mychannel-kn-channel.default.svc.cluster.local 93s True", "kn channel delete <channel_name>", "apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default", "oc apply -f <filename>", "apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1", "oc apply -f <filename>", "apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "kn subscription create <subscription_name> --channel <group:version:kind>:<channel_name> \\ 1 --sink <sink_prefix>:<sink_name> \\ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3", "kn subscription create mysubscription --channel mychannel --sink ksvc:event-display", "Subscription 'mysubscription' created in namespace 'default'.", "kn subscription list", "NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True", "kn subscription delete <subscription_name>", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 default-ch-webhook: 2 default-ch-config: | clusterDefault: 3 apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel spec: delivery: backoffDelay: PT0.5S backoffPolicy: exponential retry: 5 namespaceDefaults: 4 my-namespace: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel spec: numPartitions: 1 replicationFactor: 1", "oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem", "oc edit knativekafka", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: tls-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094 enabled: true source: enabled: true", "oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"", "oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"", "oc edit 
knativekafka", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: scram-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093 enabled: true source: enabled: true" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/channels
Extension APIs
Extension APIs OpenShift Container Platform 4.14 Reference guide for extension APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/extension_apis/index
Chapter 8. Upgrading the Migration Toolkit for Containers
Chapter 8. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.17 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 8.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.17 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.17 by using the Operator Lifecycle Manager. Important When upgrading the MTC by using the Operator Lifecycle Manager, you must use a supported migration path. Migration paths Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Migrating from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x may be used. However, it must be the same MTC version on both source & destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 8.2. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 3 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed.
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml If you have previously added the OpenShift Container Platform 3 cluster to the MTC web console, you must update the service account token in the web console because the upgrade process deletes and restores the openshift-migration namespace: Obtain the service account token by entering the following command: USD oc sa get-token migration-controller -n openshift-migration In the MTC web console, click Clusters . Click the Options menu next to the cluster and select Edit . Enter the new service account token in the Service account token field. Click Update cluster and then click Close . Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 8.3. Upgrading MTC 1.3 to 1.8 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.8, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Important Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Upgrading MTC 1.7.x to 1.8.x requires manually updating the OADP channel from stable-1.0 to stable-1.2 in order to successfully complete the upgrade from 1.7.x to 1.8.x. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ...
spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration
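Where scripting the change is preferable to editing and replacing the manifest, the same two fields can be set with a single merge patch. This is a hedged alternative sketch, not part of the documented procedure; it assumes that patching the MigPlan CR directly is acceptable in your environment.

# Set both parameters in one step instead of the edit-and-replace flow above.
USD oc patch migplan <migplan> -n openshift-migration --type merge -p '{"spec":{"indirectImageMigration":true,"indirectVolumeMigration":true}}'

# Verify the result, as in the procedure above.
USD oc get migplan <migplan> -o yaml -n openshift-migration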
[ "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "oc create -f controller.yml", "oc sa get-token migration-controller -n openshift-migration", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/upgrading-3-4
5.5. Booleans
5.5. Booleans Booleans allow parts of SELinux policy to be changed at runtime, without any knowledge of SELinux policy writing. This allows changes, such as allowing services access to NFS volumes, without reloading or recompiling SELinux policy. 5.5.1. Listing Booleans For a list of Booleans, an explanation of what each one is, and whether they are on or off, run the semanage boolean -l command as the Linux root user. The following example does not list all Booleans: The SELinux boolean column lists Boolean names. The Description column lists whether the Booleans are on or off, and what they do. In the following example, the ftp_home_dir Boolean is off, preventing the FTP daemon ( vsftpd ) from reading and writing to files in user home directories: The getsebool -a command lists Booleans, whether they are on or off, but does not give a description of each one. The following example does not list all Booleans: Run the getsebool boolean-name command to only list the status of the boolean-name Boolean: Use a space-separated list to list multiple Booleans:
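For example, the status of several Booleans can be checked with a single getsebool call by passing their names separated by spaces, and — going one step beyond listing, as an illustrative sketch rather than the procedure for this section — the ftp_home_dir Boolean shown earlier can be switched on at runtime with setsebool , where the -P option makes the change persistent across reboots:

~]USD getsebool allow_console_login allow_daemons_dump_core

~]# setsebool -P ftp_home_dir on
~]USD getsebool ftp_home_dir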
[ "~]# semanage boolean -l SELinux boolean Description ftp_home_dir -> off Allow ftp to read and write files in the user home directories xen_use_nfs -> off Allow xen to manage nfs files xguest_connect_network -> on Allow xguest to configure Network Manager", "ftp_home_dir -> off Allow ftp to read and write files in the user home directories", "~]USD getsebool -a allow_console_login --> off allow_cvs_read_shadow --> off allow_daemons_dump_core --> on", "~]USD getsebool allow_console_login allow_console_login --> off", "~]USD getsebool allow_console_login allow_cvs_read_shadow allow_daemons_dump_core allow_console_login --> off allow_cvs_read_shadow --> off allow_daemons_dump_core --> on" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-working_with_selinux-booleans
Preface
Preface As an OpenShift cluster administrator, you can configure the model registry feature for OpenShift AI administrators and data scientists to use.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/configuring_the_model_registry_component/pr01
Preface
Preface Preface
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/preface
26.9. Replacing the Web Server's and LDAP Server's Certificate
26.9. Replacing the Web Server's and LDAP Server's Certificate To replace the service certificates for the web server and LDAP server: Request a new certificate. You can do this using: the integrated CA: see Section 24.1.1, "Requesting New Certificates for a User, Host, or Service" for details. an external CA: generate a private key and certificate signing request (CSR). For example, using OpenSSL: Submit the CSR to the external CA. The process differs depending on the service to be used as the external CA. Replace the Apache web server's private key and certificate: Replace the LDAP server's private key and certificate:
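The ipa-server-certinstall commands for these two steps ( -w for the web server, -d for the Directory Server) take the key and certificate produced above. As a hedged follow-up check — the idmserver.idm.example.com host name and the file names are the example values from the CSR step, and these verification commands are illustrative rather than part of the official procedure — the new certificate can be inspected locally and confirmed on both services:

# Inspect the subject and validity window of the newly issued certificate.
openssl x509 -in new.crt -noout -subject -dates

# Confirm that the web server (port 443) and the LDAP server (port 636, LDAPS) present the new certificate.
echo | openssl s_client -connect idmserver.idm.example.com:443 2>/dev/null | openssl x509 -noout -subject -dates
echo | openssl s_client -connect idmserver.idm.example.com:636 2>/dev/null | openssl x509 -noout -subject -dates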
[ "openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN= idmserver.idm.example.com ,O= IDM.EXAMPLE.COM '", "ipa-server-certinstall -w --pin= password new.key new.crt", "ipa-server-certinstall -d --pin= password new.key new.cert" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/replace-http-ldap-cert
Chapter 17. Customizing the system in the installer
Chapter 17. Customizing the system in the installer During the customization phase of the installation, you must perform certain configuration tasks to enable the installation of Red Hat Enterprise Linux. These tasks include: Configuring the storage and assigning mount points. Selecting a base environment with software to be installed. Setting a password for the root user or creating a local user. Optionally, you can further customize the system, for example, by configuring system settings and connecting the host to a network. 17.1. Setting the installer language You can select the language to be used by the installation program before starting the installation. Prerequisites You have created installation media. You have specified an installation source if you are using the Boot ISO image file. You have booted the installation. Procedure After you select the Install Red Hat Enterprise Linux option from the boot menu, the Welcome to Red Hat Enterprise Linux screen appears. From the left-hand pane of the Welcome to Red Hat Enterprise Linux window, select a language. Alternatively, search for the preferred language by using the text box. Note A language is pre-selected by default. If network access is configured, that is, if you booted from a network server instead of local media, the pre-selected language is determined by the automatic location detection feature of the GeoIP module. If you use the inst.lang= option on the boot command line or in your PXE server configuration, then the language that you define with the boot option is selected. From the right-hand pane of the Welcome to Red Hat Enterprise Linux window, select a location specific to your region. Click Continue to proceed to the graphical installations window. If you are installing a pre-release version of Red Hat Enterprise Linux, a warning message is displayed about the pre-release status of the installation media. To continue with the installation, click I want to proceed , or to quit the installation and reboot the system, click I want to exit . 17.2. Configuring the storage devices You can install Red Hat Enterprise Linux on a large variety of storage devices. You can configure basic, locally accessible storage devices in the Installation Destination window. Basic storage devices directly connected to the local system, such as disks and solid-state drives, are displayed in the Local Standard Disks section of the window. On 64-bit IBM Z, this section contains activated Direct Access Storage Devices (DASDs). Warning A known issue prevents DASDs configured as HyperPAV aliases from being automatically attached to the system after the installation is complete. These storage devices are available during the installation, but are not immediately accessible after you finish installing and reboot. To attach HyperPAV alias devices, add them manually to the /etc/dasd.conf configuration file of the system. 17.2.1. Configuring installation destination You can use the Installation Destination window to configure the storage options, for example, the disks that you want to use as the installation target for your Red Hat Enterprise Linux installation. You must select at least one disk. Prerequisites The Installation Summary window is open. Ensure that you back up your data if you plan to use a disk that already contains data. For example, if you want to shrink an existing Microsoft Windows partition and install Red Hat Enterprise Linux as a second system, or if you are upgrading a release of Red Hat Enterprise Linux.
Manipulating partitions always carries a risk. For example, if the process is interrupted or fails for any reason, data on the disk can be lost. Procedure From the Installation Summary window, click Installation Destination . Perform the following operations in the Installation Destination window that opens: From the Local Standard Disks section, select the storage device that you require; a white check mark indicates your selection. Disks without a white check mark are not used during the installation process; they are ignored if you choose automatic partitioning, and they are not available in manual partitioning. The Local Standard Disks section shows all locally available storage devices, for example, SATA, IDE and SCSI disks, USB flash and external disks. Any storage devices connected after the installation program has started are not detected. If you use a removable drive to install Red Hat Enterprise Linux, your system is unusable if you remove the device. Optional: Click the Refresh link in the lower right-hand side of the window if you want to configure additional local storage devices to connect new disks. The Rescan Disks dialog box opens. Click Rescan Disks and wait until the scanning process completes. All storage changes that you make during the installation are lost when you click Rescan Disks . Click OK to return to the Installation Destination window. All detected disks, including any new ones, are displayed under the Local Standard Disks section. Optional: Click Add a disk to add a specialized storage device. The Storage Device Selection window opens and lists all storage devices that the installation program has access to. Optional: Under Storage Configuration , select the Automatic radio button for automatic partitioning. You can also configure custom partitioning. For more details, see Configuring manual partitioning . Optional: Select I would like to make additional space available to reclaim space from an existing partitioning layout. For example, if a disk you want to use already has a different operating system and you want to make this system's partitions smaller to allow more room for Red Hat Enterprise Linux. Optional: Select Encrypt my data to encrypt all partitions except the ones needed to boot the system (such as /boot ) using Linux Unified Key Setup (LUKS). Encrypting your disk adds an extra layer of security. Click Done . The Disk Encryption Passphrase dialog box opens. Type your passphrase in the Passphrase and Confirm fields. Click Save Passphrase to complete disk encryption. Warning If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, if you perform a Kickstart installation, you can save encryption passphrases and create backup encryption passphrases during the installation. For more information, see the Automatically installing RHEL document. Optional: Click the Full disk summary and bootloader link in the lower left-hand side of the window to select which storage device contains the boot loader. For more information, see Configuring boot loader . In most cases it is sufficient to leave the boot loader in the default location. Some configurations, for example, systems that require chain loading from another boot loader, require the boot drive to be specified manually. Click Done .
Optional: The Reclaim Disk Space dialog box appears if you selected automatic partitioning and the I would like to make additional space available option, or if there is not enough free space on the selected disks to install Red Hat Enterprise Linux. It lists all configured disk devices and all partitions on those devices. The dialog box displays information about the minimal disk space the system needs for an installation with the currently selected package set and how much space you have reclaimed. To start the reclaiming process: Review the displayed list of available storage devices. The Reclaimable Space column shows how much space can be reclaimed from each entry. Select a disk or partition to reclaim space. Use the Shrink button to use free space on a partition while preserving the existing data. Use the Delete button to delete that partition or all partitions on a selected disk including existing data. Use the Delete all button to delete all existing partitions on all disks including existing data and make this space available to install Red Hat Enterprise Linux. Click Reclaim space to apply the changes and return to graphical installations. No disk changes are made until you click Begin Installation on the Installation Summary window. The Reclaim Space dialog only marks partitions for resizing or deletion; no action is performed. Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 17.2.2. Special cases during installation destination configuration Following are some special cases to consider when you are configuring installation destinations: Some BIOS types do not support booting from a RAID card. In these instances, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. It is necessary to use an internal disk for partition creation with problematic RAID cards. A /boot partition is also necessary for software RAID setups. If you choose to partition your system automatically, you should manually edit your /boot partition. To configure the Red Hat Enterprise Linux boot loader to chain load from a different boot loader, you must specify the boot drive manually by clicking the Full disk summary and bootloader link from the Installation Destination window. When you install Red Hat Enterprise Linux on a system with both multipath and non-multipath storage devices, the automatic partitioning layout in the installation program creates volume groups that contain a mix of multipath and non-multipath devices. This defeats the purpose of multipath storage. Select either multipath or non-multipath devices on the Installation Destination window. Alternatively, proceed to manual partitioning. 17.2.3. Configuring boot loader Red Hat Enterprise Linux uses GRand Unified Bootloader version 2 ( GRUB2 ) as the boot loader for AMD64 and Intel 64, IBM Power Systems, and ARM. For 64-bit IBM Z, the zipl boot loader is used. The boot loader is the first program that runs when the system starts and is responsible for loading and transferring control to an operating system. GRUB2 can boot any compatible operating system (including Microsoft Windows) and can also use chain loading to transfer control to other boot loaders for unsupported operating systems. Warning Installing GRUB2 may overwrite your existing boot loader. If an operating system is already installed, the Red Hat Enterprise Linux installation program attempts to automatically detect and configure the boot loader to start the other operating system. 
If the boot loader is not detected, you can manually configure any additional operating systems after you finish the installation. If you are installing a Red Hat Enterprise Linux system with more than one disk, you might want to manually specify the disk where you want to install the boot loader. Procedure From the Installation Destination window, click the Full disk summary and bootloader link. The Selected Disks dialog box opens. The boot loader is installed on the device of your choice. On a UEFI system, the EFI system partition is created on the boot device during guided partitioning. To change the boot device, select a device from the list and click Set as Boot Device . You can set only one device as the boot device. To disable a new boot loader installation, select the device currently marked for boot and click Do not install boot loader . This ensures GRUB2 is not installed on any device. Warning If you choose not to install a boot loader, you cannot boot the system directly and you must use another boot method, such as a standalone commercial boot loader application. Use this option only if you have another way to boot your system. The boot loader may also require a special partition to be created, depending on whether your system uses BIOS or UEFI firmware, or whether the boot drive has a GUID Partition Table (GPT) or a Master Boot Record (MBR, also known as msdos ) label. If you use automatic partitioning, the installation program creates the partition. 17.2.4. Storage device selection The storage device selection window lists all storage devices that the installation program can access. Depending on your system and available hardware, some tabs might not be displayed. The devices are grouped under the following tabs: Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. The installation program only detects multipath storage devices with serial numbers that are 16 or 32 characters long. Other SAN Devices Devices available on a Storage Area Network (SAN). Firmware RAID Storage devices attached to a firmware RAID controller. NVDIMM Devices Under specific circumstances, Red Hat Enterprise Linux 9 can boot and run from Non-Volatile Dual In-line Memory Module (NVDIMM) devices in sector mode on the Intel 64 and AMD64 architectures. IBM Z Devices Storage devices, or Logical Units (LUNs), attached through the zSeries Linux FCP (Fiber Channel Protocol) driver. 17.2.5. Filtering storage devices In the storage device selection window you can filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN). Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the Search by tab to search by port, target, LUN, or WWID. Searching by WWID or LUN requires additional values in the corresponding input text fields. Select the option that you require from the Search drop-down menu. Click Find to start the search. Each device is presented on a separate row with a corresponding check box. Select the check box to enable the device that you require during the installation process.
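If you are unsure which WWID or LUN values to search for, you can switch to a shell prompt during the installation by pressing Ctrl + Alt + F2 and list the identifiers that the kernel reports. This is only a hedged illustration; the columns that contain data depend on your hardware, and the lsscsi utility may not be present in the installation environment:
# List device names, sizes, transport type, and World Wide Names
lsblk -o NAME,SIZE,TRAN,WWN
# If available, show host:channel:target:LUN addressing for attached SCSI devices
lsscsi
Switch back to the installer console when you are done.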
Later in the installation process you can choose to install Red Hat Enterprise Linux on any of the selected devices, and you can choose to mount any of the other selected devices as part of the installed system automatically. Selected devices are not automatically erased by the installation process and selecting a device does not put the data stored on the device at risk. Note You can add devices to the system after installation by modifying the /etc/fstab file. Click Done to return to the Installation Destination window. Any storage devices that you do not select are hidden from the installation program entirely. To chain load the boot loader from a different boot loader, select all the devices present. 17.2.6. Using advanced storage options To use an advanced storage device, you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre Channel over Ethernet) SAN (Storage Area Network). To use iSCSI storage devices for the installation, the installation program must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a user name and password for Challenge Handshake Authentication Protocol (CHAP) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached (reverse CHAP), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP. Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the user name and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps to add all required iSCSI storage. You cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. 17.2.6.1. Discovering and starting an iSCSI session The Red Hat Enterprise Linux installer can discover and log in to iSCSI disks in two ways: iSCSI Boot Firmware Table (iBFT) When the installer starts, it checks if the BIOS or add-on boot ROMs of the system support iBFT. It is a BIOS extension for systems that can boot from iSCSI. If the BIOS supports iBFT, the installer reads the iSCSI target information for the configured boot disk from the BIOS and logs in to this target, making it available as an installation target. To automatically connect to an iSCSI target, activate a network device for accessing the target. To do so, use the ip=ibft boot option. For more information, see Network boot options . Discover and add iSCSI targets manually You can discover and start an iSCSI session to identify available iSCSI targets (network storage devices) in the installer's graphical user interface. Prerequisites The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add iSCSI target . The Add iSCSI Storage Target window opens. Important You cannot place the /boot partition on iSCSI targets that you have manually added using this method - an iSCSI target containing a /boot partition must be configured for use with iBFT. 
However, in instances where the installed system is expected to boot from iSCSI with iBFT configuration provided by a method other than firmware iBFT, for example using iPXE, you can remove the /boot partition restriction using the inst.nonibftiscsiboot installer boot option. Enter the IP address of the iSCSI target in the Target IP Address field. Type a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN entry contains the following information: The string iqn. (note the period). A date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. Your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage . A colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example :diskarrays-sn-a8675309 . A complete IQN is as follows: iqn.2010-09.storage.example.com:diskarrays-sn-a8675309 . The installation program pre populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information about IQNs, see 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from tools.ietf.org and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from tools.ietf.org. Select the Discovery Authentication Type drop-down menu to specify the type of authentication to use for iSCSI discovery. The following options are available: No credentials CHAP pair CHAP pair and a reverse pair Do one of the following: If you selected CHAP pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password fields. If you selected CHAP pair and a reverse pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password field, and the user name and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Optional: Select the Bind targets to network interfaces check box. Click Start Discovery . The installation program attempts to discover an iSCSI target based on the information provided. If discovery succeeds, the Add iSCSI Storage Target window displays a list of all iSCSI nodes discovered on the target. Select the check boxes for the node that you want to use for installation. The Node login authentication type menu contains the same options as the Discovery Authentication Type menu. However, if you need credentials for discovery authentication, use the same credentials to log in to a discovered node. Click the additional Use the credentials from discovery drop-down menu. When you provide the proper credentials, the Log In button becomes available. Click Log In to initiate an iSCSI session. While the installer uses iscsiadm to find and log into iSCSI targets, iscsiadm automatically stores any information about these targets in the iscsiadm iSCSI database. The installer then copies this database to the installed system and marks any iSCSI targets that are not used for root partition, so that the system automatically logs in to them when it starts. 
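The discovery and login steps above correspond roughly to what the iscsiadm utility does on an installed system. The following is a sketch only; the portal address and target name are placeholders, and CHAP options are omitted:
# Discover targets exported by a portal
iscsiadm --mode discovery --type sendtargets --portal 192.0.2.10
# Log in to one of the discovered targets
iscsiadm --mode node --targetname iqn.2010-09.com.example:target0 --portal 192.0.2.10:3260 --login
During installation itself, use the graphical dialog described in this section rather than running iscsiadm manually.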
If the root partition is placed on an iSCSI target, initrd logs into this target and the installer does not include this target in start up scripts to avoid multiple attempts to log into the same target. 17.2.6.2. Configuring FCoE parameters You can discover the FCoE (Fibre Channel over Ethernet) devices from the Installation Destination window by configuring the FCoE parameters accordingly. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add FCoE SAN . A dialog box opens for you to configure network interfaces for discovering FCoE storage devices. Select a network interface that is connected to an FCoE switch in the NIC drop-down menu. Click Add FCoE disk(s) to scan the network for SAN devices. Select the required check boxes: Use DCB: Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Select the check box to enable or disable the installation program's awareness of DCB. Enable this option only for network interfaces that require a host-based DCBX client. For configurations on interfaces that use a hardware DCBX client, disable the check box. Use auto vlan: Auto VLAN is enabled by default and indicates whether VLAN discovery should be performed. If this check box is enabled, then the FIP (FCoE Initiation Protocol) VLAN discovery protocol runs on the Ethernet interface when the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs are automatically created and FCoE instances are created on the VLAN interfaces. Discovered FCoE devices are displayed under the Other SAN Devices tab in the Installation Destination window. 17.2.6.3. Configuring DASD storage devices You can discover and configure the DASD storage devices from the Installation Destination window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add DASD ECKD . The Add DASD Storage Target dialog box opens and prompts you to specify a device number, such as 0.0.0204 , and attach additional DASDs that were not detected when the installation started. Type the device number of the DASD that you want to attach in the Device number field. Click Start Discovery . If a DASD with the specified device number is found and if it is not already attached, the dialog box closes and the newly-discovered drives appear in the list of drives. You can then select the check boxes for the required devices and click Done . The new DASDs are available for selection, marked as DASD device 0.0. xxxx in the Local Standard Disks section of the Installation Destination window. If you entered an invalid device number, or if the DASD with the specified device number is already attached to the system, an error message appears in the dialog box, explaining the error and prompting you to try again with a different device number. Additional resources Preparing an ECKD type DASD for use 17.2.6.4. 
Configuring FCP devices FCP devices enable 64-bit IBM Z to use SCSI devices rather than, or in addition to, Direct Access Storage Device (DASD) devices. FCP devices provide a switched fabric topology that enables 64-bit IBM Z systems to use SCSI LUNs as disk devices in addition to traditional DASD devices. Prerequisites The Installation Summary window is open. For an FCP-only installation, you have removed the DASD= option from the CMS configuration file or the rd.dasd= option from the parameter file to indicate that no DASD is present. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add ZFCP LUN . The Add zFCP Storage Target dialog box opens allowing you to add a FCP (Fibre Channel Protocol) storage device. 64-bit IBM Z requires that you enter any FCP device manually so that the installation program can activate FCP LUNs. You can enter FCP devices either in the graphical installation, or as a unique parameter entry in the parameter or CMS configuration file. The values that you enter must be unique to each site that you configure. Type the 4 digit hexadecimal device number in the Device number field. When installing RHEL-9.0 or older releases or if the zFCP device is not configured in NPIV mode, or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter, provide the following values: Type the 16 digit hexadecimal World Wide Port Number (WWPN) in the WWPN field. Type the 16 digit hexadecimal FCP LUN identifier in the LUN field. Click Start Discovery to connect to the FCP device. The newly-added devices are displayed in the IBM Z tab of the Installation Destination window. Use only lower-case letters in hex values. If you enter an incorrect value and click Start Discovery , the installation program displays a warning. You can edit the configuration information and retry the discovery attempt. For more information about these values, consult the hardware documentation and check with your system administrator. 17.2.7. Installing to an NVDIMM device Non-Volatile Dual In-line Memory Module (NVDIMM) devices combine the performance of RAM with disk-like data persistence when no power is supplied. Under specific circumstances, Red Hat Enterprise Linux 9 can boot and run from NVDIMM devices. 17.2.7.1. Criteria for using an NVDIMM device as an installation target You can install Red Hat Enterprise Linux 9 to Non-Volatile Dual In-line Memory Module (NVDIMM) devices in sector mode on the Intel 64 and AMD64 architectures, supported by the nd_pmem driver. Conditions for using an NVDIMM device as storage To use an NVDIMM device as storage, the following conditions must be satisfied: The architecture of the system is Intel 64 or AMD64. The NVDIMM device is configured to sector mode. The installation program can reconfigure NVDIMM devices to this mode. The NVDIMM device must be supported by the nd_pmem driver. Conditions for booting from an NVDIMM Device Booting from an NVDIMM device is possible under the following conditions: All conditions for using the NVDIMM device as storage are satisfied. The system uses UEFI. The NVDIMM device must be supported by firmware available on the system, or by an UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The NVDIMM device must be made available under a namespace. 
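On a running system, you can check and change the mode of an NVDIMM namespace with the ndctl utility. This is a sketch under the assumption that the ndctl package is installed and that namespace0.0 backs the device you intend to use; reconfiguring a namespace destroys the data stored on it:
# List existing namespaces and their current mode
ndctl list --namespaces
# Reconfigure a namespace to sector mode (destroys existing data on the namespace)
ndctl create-namespace --force --reconfig=namespace0.0 --mode=sector
During installation, the equivalent reconfiguration is available from the NVDIMM Devices tab in the graphical procedure that follows.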
To utilize the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. 17.2.7.2. Configuring an NVDIMM device using the graphical installation mode A Non-Volatile Dual In-line Memory Module (NVDIMM) device must be properly configured for use by Red Hat Enterprise Linux 9 using the graphical installation. Warning The reconfiguration process destroys any data stored on the NVDIMM device. Prerequisites An NVDIMM device is present on the system and satisfies all the other conditions for usage as an installation target. The installation has booted and the Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the NVDIMM Devices tab. To reconfigure a device, select it from the list. If a device is not listed, it is not in sector mode. Click Reconfigure NVDIMM . A reconfiguration dialog opens. Enter the sector size that you require and click Start Reconfiguration . The supported sector sizes are 512 and 4096 bytes. When reconfiguration completes, click OK . Select the device check box. Click Done to return to the Installation Destination window. The NVDIMM device that you reconfigured is displayed in the Specialized & Network Disks section. Click Done to return to the Installation Summary window. The NVDIMM device is now available for you to select as an installation target. Additionally, if the device meets the requirements for booting, you can set the device as a boot device. 17.3. Configuring the root user and creating local accounts 17.3.1. Configuring a root password You must configure a root password to finish the installation process and to log in to the administrator (also known as superuser or root) account that is used for system administration tasks. These tasks include installing and updating software packages and changing system-wide configuration such as network and firewall settings, storage options, and adding or modifying users, groups and file permissions. To gain root privileges on the installed system, you can either use the root account or create a user account with administrative privileges (member of the wheel group). The root account is always created during the installation. Switch to the administrator account only when you need to perform a task that requires administrator access. Warning The root account has complete control over the system. If unauthorized personnel gain access to the account, they can access or delete users' personal files. Procedure From the Installation Summary window, select User Settings > Root Password . The Root Password window opens. Type your password in the Root Password field. The requirements for creating a strong root password are: Must be at least eight characters long May contain numbers, letters (upper and lower case) and symbols Is case-sensitive Type the same password in the Confirm field. Optional: Select the Lock root account option to disable root access to the system. Optional: Select the Allow root SSH login with password option to enable SSH access (with password) to this system as a root user. By default, password-based SSH root access is disabled.
Click Done to confirm your root password and return to the Installation Summary window. If you proceed with a weak password, you must click Done twice. 17.3.2. Creating a user account Create a user account to finish the installation. If you do not create a user account, you must log in to the system as root directly, which is not recommended. Procedure On the Installation Summary window, select User Settings > User Creation . The Create User window opens. Type the user account name in to the Full name field, for example: John Smith. Type the username in to the User name field, for example: jsmith. The User name is used to log in from a command line; if you install a graphical environment, then your graphical login manager uses the Full name . Select the Make this user administrator check box if the user requires administrative rights (the installation program adds the user to the wheel group ). An administrator user can use the sudo command to perform tasks that are only available to root using the user password, instead of the root password. This may be more convenient, but it can also cause a security risk. Select the Require a password to use this account check box. If you give administrator privileges to a user, ensure the account is password protected. Never give a user administrator privileges without assigning a password to the account. Type a password into the Password field. Type the same password into the Confirm password field. Click Done to apply the changes and return to the Installation Summary window. 17.3.3. Editing advanced user settings This procedure describes how to edit the default settings for the user account in the Advanced User Configuration dialog box. Procedure On the Create User window, click Advanced . Edit the details in the Home directory field, if required. The field is populated by default with /home/ username . In the User and Groups IDs section you can: Select the Specify a user ID manually check box and use + or - to enter the required value. The default value is 1000. User IDs (UIDs) 0-999 are reserved by the system so they cannot be assigned to a user. Select the Specify a group ID manually check box and use + or - to enter the required value. The default group name is the same as the user name, and the default Group ID (GID) is 1000. GIDs 0-999 are reserved by the system so they can not be assigned to a user group. Specify additional groups as a comma-separated list in the Group Membership field. Groups that do not already exist are created; you can specify custom GIDs for additional groups in parentheses. If you do not specify a custom GID for a new group, the new group receives a GID automatically. The user account created always has one default group membership (the user's default group with an ID set in the Specify a group ID manually field). Click Save Changes to apply the updates and return to the Create User window. 17.4. Configuring manual partitioning You can use manual partitioning to configure your disk partitions and mount points and define the file system that Red Hat Enterprise Linux is installed on. Before installation, you should consider whether you want to use partitioned or unpartitioned disk devices. For more information about the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM, see the Red Hat Knowledgebase solution advantages and disadvantages to using partitioning on LUNs . 
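Before choosing between the partitioning options described below, it can help to review the existing layout and free space on your disks from a shell. A brief, hedged example; device names and output differ per system:
# Show disks, partitions, sizes, and current mount points
lsblk
# Print the partition tables of all detected disks
parted -l
Back up anything you intend to keep before you change the layout.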
You have different partitioning and storage options available, including Standard Partitions , LVM , and LVM thin provisioning . These options provide various benefits and configurations for managing your system's storage effectively. Standard partition A standard partition contains a file system or swap space. Standard partitions are most commonly used for /boot and the BIOS Boot and EFI System partitions . You can use the LVM logical volumes in most other uses. LVM Choosing LVM (or Logical Volume Management) as the device type creates an LVM logical volume. LVM improves performance when using physical disks, and it allows for advanced setups such as using multiple physical disks for one mount point, and setting up software RAID for increased performance, reliability, or both. LVM thin provisioning Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can dynamically expand the pool when needed for cost-effective allocation of storage space. An installation of Red Hat Enterprise Linux requires a minimum of one partition but uses at least the following partitions or volumes: / , /home , /boot , and swap . You can also create additional partitions and volumes as you require. To prevent data loss it is recommended that you back up your data before proceeding. If you are upgrading or creating a dual-boot system, you should back up any data you want to keep on your storage devices. 17.4.1. Recommended partitioning scheme Create separate file systems at the following mount points. However, if required, you can also create the file systems at /usr , /var , and /tmp mount points. /boot / (root) /home swap /boot/efi PReP This partition scheme is recommended for bare metal deployments and it does not apply to virtual and cloud deployments. /boot partition - recommended size at least 1 GiB The partition mounted on /boot contains the operating system kernel, which allows your system to boot Red Hat Enterprise Linux 9, along with files used during the bootstrap process. Due to the limitations of most firmwares, create a small partition to hold these. In most scenarios, a 1 GiB boot partition is adequate. Unlike other mount points, using an LVM volume for /boot is not possible - /boot must be located on a separate disk partition. If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In such a case, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. Warning Normally, the /boot partition is created automatically by the installation program. However, if the / (root) partition is larger than 2 TiB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TiB to boot the machine successfully. Ensure the /boot partition is located within the first 2 TB of the disk while manual partitioning. Placing the /boot partition beyond the 2 TB boundary might result in a successful installation, but the system fails to boot because BIOS cannot read the /boot partition beyond this limit. root - recommended size of 10 GiB This is where " / ", or the root directory, is located. The root directory is the top-level of the directory structure. By default, all files are written to this file system unless a different file system is mounted in the path being written to, for example, /boot or /home . 
While a 5 GiB root file system allows you to install a minimal installation, it is recommended to allocate at least 10 GiB so that you can install as many package groups as you want. Do not confuse the / directory with the /root directory. The /root directory is the home directory of the root user. The /root directory is sometimes referred to as slash root to distinguish it from the root directory. /home - recommended size at least 1 GiB To store user data separately from system data, create a dedicated file system for the /home directory. Base the file system size on the amount of data that is stored locally, number of users, and so on. You can upgrade or reinstall Red Hat Enterprise Linux 9 without erasing user data files. If you select automatic partitioning, it is recommended to have at least 55 GiB of disk space available for the installation, to ensure that the /home file system is created. swap partition - recommended size at least 1 GiB Swap file systems support virtual memory; data is written to a swap file system when there is not enough RAM to store the data your system is processing. Swap size is a function of system memory workload, not total system memory and therefore is not equal to the total system memory size. It is important to analyze what applications a system will be running and the load those applications will serve in order to determine the system memory workload. Application providers and developers can provide guidance. When the system runs out of swap space, the kernel terminates processes as the system RAM memory is exhausted. Configuring too much swap space results in storage devices being allocated but idle and is a poor use of resources. Too much swap space can also hide memory leaks. The maximum size for a swap partition and other additional information can be found in the mkswap(8) manual page. The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and if you want sufficient memory for your system to hibernate. If you let the installation program partition your system automatically, the swap partition size is established using these guidelines. Automatic partitioning setup assumes hibernation is not in use. The maximum size of the swap partition is limited to 10 percent of the total size of the disk, and the installation program cannot create swap partitions more than 1TiB. To set up enough swap space to allow for hibernation, or if you want to set the swap partition size to more than 10 percent of the system's storage space, or more than 1TiB, you must edit the partitioning layout manually. Table 17.1. Recommended system swap space Amount of RAM in the system Recommended swap space Recommended swap space if allowing for hibernation Less than 2 GiB 2 times the amount of RAM 3 times the amount of RAM 2 GiB - 8 GiB Equal to the amount of RAM 2 times the amount of RAM 8 GiB - 64 GiB 4 GiB to 0.5 times the amount of RAM 1.5 times the amount of RAM More than 64 GiB Workload dependent (at least 4GiB) Hibernation not recommended /boot/efi partition - recommended size of 200 MiB UEFI-based AMD64, Intel 64, and 64-bit ARM require a 200 MiB EFI system partition. The recommended minimum size is 200 MiB, the default size is 600 MiB, and the maximum size is 600 MiB. BIOS systems do not require an EFI system partition. At the border between each range, for example, a system with 2 GiB, 8 GiB, or 64 GiB of system RAM, discretion can be exercised with regard to chosen swap space and hibernation support. 
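As a worked example of the table above: a host with 16 GiB of RAM falls into the 8 GiB - 64 GiB row, so roughly 4 GiB to 8 GiB of swap is recommended without hibernation, and about 24 GiB (1.5 times the RAM) if hibernation is required. In a Kickstart file, the same guidelines can be applied automatically; these lines are a sketch of the relevant options only, not a complete storage section:
# Size swap according to the recommendation table (no hibernation)
part swap --recommended
# Or size swap large enough to support hibernation instead
part swap --hibernation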
If your system resources allow for it, increasing the swap space can lead to better performance. Distributing swap space over multiple storage devices - particularly on systems with fast drives, controllers and interfaces - also improves swap space performance. Many systems have more partitions and volumes than the minimum required. Choose partitions based on your particular system needs. If you are unsure about configuring partitions, accept the automatic default partition layout provided by the installation program. Note Only assign storage capacity to those partitions you require immediately. You can allocate free space at any time, to meet needs as they occur. PReP boot partition - recommended size of 4 to 8 MiB When installing Red Hat Enterprise Linux on IBM Power System servers, the first partition of the disk should include a PReP boot partition. This contains the GRUB boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 17.4.2. Supported hardware storage It is important to understand how storage technologies are configured and how support for them may have changed between major versions of Red Hat Enterprise Linux. Hardware RAID Any RAID functions provided by the mainboard of your computer, or attached controller cards, need to be configured before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. Software RAID On systems with more than one disk, you can use the Red Hat Enterprise Linux installation program to operate several of the drives as a Linux software RAID array. With a software RAID array, RAID functions are controlled by the operating system rather than the dedicated hardware. Note When a pre-existing RAID array's member devices are all unpartitioned disks/drives, the installation program treats the array as a disk and there is no method to remove the array. USB Disks You can connect and configure external USB storage after installation. Most devices are recognized by the kernel, but some devices may not be recognized. If it is not a requirement to configure these disks during installation, disconnect them to avoid potential problems. NVDIMM devices To use a Non-Volatile Dual In-line Memory Module (NVDIMM) device as storage, the following conditions must be satisfied: The architecture of the system is Intel 64 or AMD64. The device is configured to sector mode. Anaconda can reconfigure NVDIMM devices to this mode. The device must be supported by the nd_pmem driver. Booting from an NVDIMM device is possible under the following additional conditions: The system uses UEFI. The device must be supported by firmware available on the system, or by a UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The device must be made available under a namespace. To take advantage of the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. Note The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. Considerations for Intel BIOS RAID Sets Red Hat Enterprise Linux uses mdraid for installing on Intel BIOS RAID sets. These sets are automatically detected during the boot process and their device node paths can change across several booting processes. Replace device node paths (such as /dev/sda ) with file system labels or device UUIDs. You can find the file system labels and device UUIDs using the blkid command. 17.4.3. 
Starting manual partitioning You can partition the disks based on your requirements by using manual partitioning. Prerequisites The Installation Summary screen is open. All disks are available to the installation program. Procedure Select disks for installation: Click Installation Destination to open the Installation Destination window. Select the disks that you require for installation by clicking the corresponding icon. A selected disk has a check-mark displayed on it. Under Storage Configuration , select the Custom radio-button. Optional: To enable storage encryption with LUKS, select the Encrypt my data check box. Click Done . If you selected to encrypt the storage, a dialog box for entering a disk encryption passphrase opens. Type in the LUKS passphrase: Enter the passphrase in the two text fields. To switch keyboard layout, use the keyboard icon. Warning In the dialog box for entering the passphrase, you cannot change the keyboard layout. Select the English keyboard layout to enter the passphrase in the installation program. Click Save Passphrase . The Manual Partitioning window opens. Detected mount points are listed in the left-hand pane. The mount points are organized by detected operating system installations. As a result, some file systems may be displayed multiple times if a partition is shared among several installations. Select the mount points in the left pane; the options that can be customized are displayed in the right pane. Optional: If your system contains existing file systems, ensure that enough space is available for the installation. To remove any partitions, select them in the list and click the - button. The dialog has a check box that you can use to remove all other partitions used by the system to which the deleted partition belongs. Optional: If there are no existing partitions and you want to create a set of partitions as a starting point, select your preferred partitioning scheme from the left pane (default for Red Hat Enterprise Linux is LVM) and click the Click here to create them automatically link. Note A /boot partition, a / (root) volume, and a swap volume proportional to the size of the available storage are created and listed in the left pane. These are the file systems for a typical installation, but you can add additional file systems and mount points. Click Done to confirm any changes and return to the Installation Summary window. 17.4.4. Supported file systems When configuring manual partitioning, you can optimize performance, ensure compatibility, and effectively manage disk space by utilizing the various file systems and partition types available in Red Hat Enterprise Linux. xfs XFS is a highly scalable, high-performance file system that supports file systems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes), and directory structures containing tens of millions of entries. XFS also supports metadata journaling, which facilitates quicker crash recovery. The maximum supported size of a single XFS file system is 500 TB. XFS is the default file system on Red Hat Enterprise Linux. The XFS filesystem cannot be shrunk to get free space. ext4 The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. 
The maximum supported size of a single ext4 file system is 50 TB. ext3 The ext3 file system is based on the ext2 file system and has one main advantage - journaling. Using a journaling file system reduces the time spent recovering a file system after it terminates unexpectedly, as there is no need to check the file system for metadata consistency by running the fsck utility every time. ext2 An ext2 file system supports standard Unix file types, including regular files, directories, or symbolic links. It provides the ability to assign long file names, up to 255 characters. swap Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. vfat The VFAT file system is a Linux file system that is compatible with Microsoft Windows long file names on the FAT file system. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr and so on. BIOS Boot A very small partition required for booting from a device with a GUID partition table (GPT) on BIOS systems and UEFI systems in BIOS compatibility mode. EFI System Partition A small partition required for booting a device with a GUID partition table (GPT) on a UEFI system. PReP This small boot partition is located on the first partition of the disk. The PReP boot partition contains the GRUB2 boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 17.4.5. Adding a mount point file system You can add multiple mount point file systems. You can use any of the file systems and partition types available, such as XFS, ext4, ext3, ext2, swap, VFAT, and specific partitions like BIOS Boot, EFI System Partition, and PReP to effectively configure your system's storage. Prerequisites You have planned your partitions. Ensure you haven't specified mount points at paths with symbolic links, such as /var/mail , /usr/tmp , /lib , /sbin , /lib64 , and /bin . The payload, including RPM packages, depends on creating symbolic links to specific directories. Procedure Click + to create a new mount point file system. The Add a New Mount Point dialog opens. Select one of the preset paths from the Mount Point drop-down menu or type your own; for example, select / for the root partition or /boot for the boot partition. Enter the size of the file system into the Desired Capacity field; for example, 2GiB . If you do not specify a value in Desired Capacity , or if you specify a size bigger than available space, then all remaining free space is used. Click Add mount point to create the partition and return to the Manual Partitioning window. 17.4.6. Configuring storage for a mount point file system You can set the partitioning scheme for each mount point that was created manually. The available options are Standard Partition , LVM , and LVM Thin Provisioning . Btrfs support has been removed in Red Hat Enterprise Linux 9. Note The /boot partition is always located on a standard partition, regardless of the value selected. Procedure To change the devices that a single non-LVM mount point should be located on, select the required mount point from the left-hand pane. Under the Device(s) heading, click Modify . The Configure Mount Point dialog opens. Select one or more devices and click Select to confirm your selection and return to the Manual Partitioning window. Click Update Settings to apply the changes.
In the lower left-hand side of the Manual Partitioning window, click the storage device selected link to open the Selected Disks dialog and review disk information. Optional: Click the Rescan button (circular arrow button) to refresh all local disks and partitions; this is only required after performing advanced partition configuration outside the installation program. Clicking the Rescan Disks button resets all configuration changes made in the installation program. 17.4.7. Customizing a mount point file system You can customize a partition or volume if you want to set specific settings. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex as these directories contain critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system is unable to boot, or hangs with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories below them. For example, a separate partition for /var/www works successfully. Procedure From the left pane, select the mount point. Figure 17.1. Customizing Partitions From the right-hand pane, you can customize the following options: Enter the file system mount point into the Mount Point field. For example, if a file system is the root file system, enter / ; enter /boot for the /boot file system, and so on. For a swap file system, do not set the mount point as setting the file system type to swap is sufficient. Enter the size of the file system in the Desired Capacity field. You can use common size units such as KiB or GiB. The default is MiB if you do not set any other unit. Select the device type that you require from the drop-down Device Type menu: Standard Partition , LVM , or LVM Thin Provisioning . Note RAID is available only if two or more disks are selected for partitioning. If you choose RAID , you can also set the RAID Level . Similarly, if you select LVM , you can specify the Volume Group . Select the Encrypt check box to encrypt the partition or volume. You must set a password later in the installation program. The LUKS Version drop-down menu is displayed. Select the LUKS version that you require from the drop-down menu. Select the appropriate file system type for this partition or volume from the File system drop-down menu. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr , and so on. Select the Reformat check box to format an existing partition, or clear the Reformat check box to retain your data. The newly-created partitions and volumes must be reformatted, and the check box cannot be cleared. Type a label for the partition in the Label field. Use labels to easily recognize and address individual partitions. Type a name in the Name field. The standard partitions are named automatically when they are created and you cannot edit the names of standard partitions. For example, you cannot edit the /boot name sda1 . Click Update Settings to apply your changes and if required, select another partition to customize. Changes are not applied until you click Begin Installation from the Installation Summary window. Optional: Click Reset All to discard your partition changes. Click Done when you have created and customized all file systems and mount points. If you choose to encrypt a file system, you are prompted to create a passphrase. 
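For comparison, a custom layout like the one assembled in this window can also be expressed declaratively in Kickstart. The sizes, volume group name, and file systems below are illustrative assumptions rather than recommendations from this guide:
# Standard /boot partition plus an LVM physical volume that uses the remaining space
part /boot --fstype=xfs --size=1024
part pv.01 --fstype=lvmpv --size=1 --grow
# Volume group and logical volumes for /, /home, and swap
volgroup rhel pv.01
logvol / --vgname=rhel --name=root --fstype=xfs --size=10240
logvol /home --vgname=rhel --name=home --fstype=xfs --size=1024
logvol swap --vgname=rhel --name=swap --size=2048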
A Summary of Changes dialog box opens, displaying a summary of all storage actions for the installation program. Click Accept Changes to apply the changes and return to the Installation Summary window. 17.4.8. Preserving the /home directory In a Red Hat Enterprise Linux 9 graphical installation, you can preserve the /home directory that was used on your RHEL 8 system. Preserving /home is only possible if the /home directory is located on a separate /home partition on your RHEL 8 system. Preserving the /home directory that includes various configuration settings, makes it possible that the GNOME Shell environment on the new Red Hat Enterprise Linux 9 system is set in the same way as it was on your RHEL 8 system. Note that this applies only for users on Red Hat Enterprise Linux 9 with the same user name and ID as on the RHEL 8 system. Prerequisites You have RHEL 8 installed on your computer. The /home directory is located on a separate /home partition on your RHEL 8 system. The Red Hat Enterprise Linux 9 Installation Summary window is open. Procedure Click Installation Destination to open the Installation Destination window. Under Storage Configuration , select the Custom radio button. Click Done . Click Done , the Manual Partitioning window opens. Choose the /home partition, fill in /home under Mount Point: and clear the Reformat check box. Figure 17.2. Ensuring that /home is not formatted Optional: You can also customize various aspects of the /home partition required for your Red Hat Enterprise Linux 9 system as described in Customizing a mount point file system . However, to preserve /home from your RHEL 8 system, it is necessary to clear the Reformat check box. After you customized all partitions according to your requirements, click Done . The Summary of changes dialog box opens. Verify that the Summary of changes dialog box does not show any change for /home . This means that the /home partition is preserved. Click Accept Changes to apply the changes, and return to the Installation Summary window. 17.4.9. Creating a software RAID during the installation Redundant Arrays of Independent Disks (RAID) devices are constructed from multiple storage devices that are arranged to provide increased performance and, in some configurations, greater fault tolerance. A RAID device is created in one step and disks are added or removed as necessary. You can configure one RAID partition for each physical disk in your system, so that the number of disks available to the installation program determines the levels of RAID device available. For example, if your system has two disks, you cannot create a RAID 10 device, as it requires a minimum of three separate disks. To optimize your system's storage performance and reliability, RHEL supports software RAID 0 , RAID 1 , RAID 4 , RAID 5 , RAID 6 , and RAID 10 types with LVM and LVM Thin Provisioning to set up storage on the installed system. Note On 64-bit IBM Z, the storage subsystem uses RAID transparently. You do not have to configure software RAID manually. Prerequisites You have selected two or more disks for installation before RAID configuration options are visible. Depending on the RAID type you want to create, at least two disks are required. You have created a mount point. By configuring a mount point, you can configure the RAID device. You have selected the Custom radio button on the Installation Destination window. Procedure From the left pane of the Manual Partitioning window, select the required partition. 
Under the Device(s) section, click Modify . The Configure Mount Point dialog box opens. Select the disks that you want to include in the RAID device and click Select . Click the Device Type drop-down menu and select RAID . Click the File System drop-down menu and select your preferred file system type. Click the RAID Level drop-down menu and select your preferred level of RAID. Click Update Settings to save your changes. Click Done to apply the settings to return to the Installation Summary window. Additional resources Creating a RAID LV with DM integrity Managing RAID 17.4.10. Creating an LVM logical volume Logical Volume Manager (LVM) presents a simple logical view of underlying physical storage space, such as disks or LUNs. Partitions on physical storage are represented as physical volumes that you can group together into volume groups. You can divide each volume group into multiple logical volumes, each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks. Important LVM configuration is available only in the graphical installation program. During text-mode installation, LVM configuration is not available. To create an LVM configuration, press Ctrl + Alt + F2 to use a shell prompt in a different virtual console. You can run vgcreate and lvm commands in this shell. To return to the text-mode installation, press Ctrl + Alt + F1 . Procedure From the Manual Partitioning window, create a new mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter manually. Enter the size of the file system in to the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Select LVM in the drop-down menu. The Volume Group drop-down menu is displayed with the newly-created volume group name. Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information about Kickstart, see the Automatically installing RHEL . Click Done to return to the Installation Summary window. Additional resources Configuring and managing logical volumes 17.4.11. Configuring an LVM logical volume You can configure a newly-created LVM logical volume based on your requirements. Warning Placing the /boot partition on an LVM volume is not supported. Procedure From the Manual Partitioning window, create a mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter manually. Enter the size of the file system in to the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Click the Device Type drop-down menu and select LVM . The Volume Group drop-down menu is displayed with the newly-created volume group name. Click Modify to configure the newly-created volume group. The Configure Volume Group dialog box opens. 
Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information, see the Automatically installing RHEL document. Optional: From the RAID Level drop-down menu, select the RAID level that you require. The available RAID levels are the same as with actual RAID devices. Select the Encrypt check box to mark the volume group for encryption. From the Size policy drop-down menu, select any of the following size policies for the volume group: The available policy options are: Automatic The size of the volume group is set automatically so that it is large enough to contain the configured logical volumes. This is optimal if you do not need free space within the volume group. As large as possible The volume group is created with maximum size, regardless of the size of the configured logical volumes it contains. This is optimal if you plan to keep most of your data on LVM and later need to increase the size of some existing logical volumes, or if you need to create additional logical volumes within this group. Fixed You can set an exact size of the volume group. Any configured logical volumes must then fit within this fixed size. This is useful if you know exactly how large you need the volume group to be. Click Save to apply the settings and return to the Manual Partitioning window. Click Update Settings to save your changes. Click Done to return to the Installation Summary window. 17.4.12. Advice on partitions There is no best way to partition every system; the optimal setup depends on how you plan to use the system being installed. However, the following tips may help you find the optimal layout for your needs: Create partitions that have specific requirements first, for example, if a particular partition must be on a specific disk. Consider encrypting any partitions and volumes which might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition, which contains user data. In some cases, creating separate mount points for directories other than / , /boot and /home may be useful; for example, on a server running a MySQL database, having a separate mount point for /var/lib/mysql allows you to preserve the database during a re-installation without having to restore it from backup afterward. However, having unnecessary separate mount points will make storage administration more difficult. Some special restrictions apply to certain directories with regards to which partitioning layouts can be placed. Notably, the /boot directory must always be on a physical partition (not on an LVM volume). If you are new to Linux, consider reviewing the Linux Filesystem Hierarchy Standard for information about various system directories and their contents. Each kernel requires approximately: 60MiB (initrd 34MiB, 11MiB vmlinuz, and 5MiB System.map) For rescue mode: 100MiB (initrd 76MiB, 11MiB vmlinuz, and 5MiB System map) When kdump is enabled in system it will take approximately another 40MiB (another initrd with 33MiB) The default partition size of 1 GiB for /boot should suffice for most common use cases. 
However, increase the size of this partition if you are planning on retaining multiple kernel releases or errata kernels. The /var directory holds content for a number of applications, including the Apache web server, and is used by the DNF package manager to temporarily store downloaded package updates. Make sure that the partition or volume containing /var has at least 5 GiB. The /usr directory holds the majority of software on a typical Red Hat Enterprise Linux installation. The partition or volume containing this directory should therefore be at least 5 GiB for minimal installations, and at least 10 GiB for installations with a graphical environment. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories under them. For example, a separate partition for /var/www works without issues. Important Some security policies require the separation of /usr and /var , even though it makes administration more complex. Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other volumes. You can also select the LVM Thin Provisioning device type for the partition to have the unused space handled automatically by the volume. The size of an XFS file system cannot be reduced - if you need to make a partition or volume with this file system smaller, you must back up your data, destroy the file system, and create a new, smaller one in its place. Therefore, if you plan to alter your partitioning layout later, you should use the ext4 file system instead. Use Logical Volume Manager (LVM) if you anticipate expanding your storage by adding more disks or expanding virtual machine disks after the installation. With LVM, you can create physical volumes on the new drives, and then assign them to any volume group and logical volume as you see fit - for example, you can easily expand your system's /home (or any other directory residing on a logical volume). Creating a BIOS Boot partition or an EFI System Partition may be necessary, depending on your system's firmware, boot drive size, and boot drive disk label. Note that you cannot create a BIOS Boot or EFI System Partition in graphical installation if your system does not require one - in that case, they are hidden from the menu. Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 17.5. Selecting the base environment and additional software Use the Software Selection window to select the software packages that you require. The packages are organized by Base Environment and Additional Software. Base Environment contains predefined packages. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom operating system, Virtualization Host. The availability is dependent on the installation ISO image that is used as the installation source. Additional Software for Selected Environment contains additional software packages for the base environment. You can select multiple software packages. 
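To see which environments and groups your installation source actually provides, or to script the selection, you can query the comps metadata mentioned later in this section and reuse the IDs in a Kickstart %packages section. The environment and group IDs below ( minimal-environment , standard ) are common defaults and may differ on your media; check the comps file if in doubt:
# On an installed system, list the available environment groups and package groups
dnf group list
# Kickstart equivalent of selecting a base environment plus an additional group
%packages
@^minimal-environment
@standard
%end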
Use a predefined environment and additional software to customize your system. However, in a standard installation, you cannot select individual packages to install. To view the packages contained in a specific environment, see the repository /repodata/*-comps- repository . architecture .xml file on your installation source media (DVD, CD, USB). The XML file contains details of the packages installed as part of a base environment. Available environments are marked by the <environment> tag, and additional software packages are marked by the <group> tag. If you are unsure about which packages to install, select the Minimal Install base environment. Minimal install installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. After the system finishes installing and you log in for the first time, you can use the DNF package manager to install additional software. For more information about DNF package manager, see the Configuring basic system settings document. Note Use the dnf group list command from any RHEL 9 system to view the list of packages being installed on the system as a part of software selection. For more information, see Configuring basic system settings . If you need to control which packages are installed, you can use a Kickstart file and define the packages in the %packages section. By default, RHEL 9 does not install the TuneD package. You can manually install the TuneD package using the dnf install tuned command. For more information, see the Automatically installing RHEL document. Prerequisites You have configured the installation source. The installation program has downloaded package metadata. The Installation Summary window is open. Procedure From the Installation Summary window, click Software Selection . The Software Selection window opens. From the Base Environment pane, select a base environment. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom Operating System, Virtualization Host. By default, the Server with GUI base environment is selected. Figure 17.3. Red Hat Enterprise Linux Software Selection Optional: For installations on ARM based systems, select desired Page size from Kernel Options . By default, it selects Kernel with a 4k page size. Warning If you want to use the Kernel with 64k page size, ensure you select Minimal Install under Base Environment to use this option. You can install additional software after you login to the system for the first time post installation using the DNF package manager. From the Additional Software for Selected Environment pane, select one or more options. Click Done to apply the settings and return to graphical installations. Additional resources The 4k and 64k page size Kernel Options 17.6. Optional: Configuring the network and host name Use the Network and Host name window to configure network interfaces. Options that you select here are available both during the installation for tasks such as downloading packages from a remote location, and on the installed system. Follow the steps in this procedure to configure your network and host name. Procedure From the Installation Summary window, click Network and Host Name . From the list in the left-hand pane, select an interface. The details are displayed in the right-hand pane. Toggle the ON/OFF switch to enable or disable the selected interface. You cannot add or remove interfaces manually. 
Click + to add a virtual network interface, which can be either: Team (deprecated), Bond, Bridge, or VLAN. Click - to remove a virtual interface. Click Configure to change settings such as IP addresses, DNS servers, or routing configuration for an existing interface (both virtual and physical). Type a host name for your system in the Host Name field. The host name can either be a fully qualified domain name (FQDN) in the format hostname.domainname , or a short host name without the domain. Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this system, specify only the short host name. Host names can only contain alphanumeric characters and - or . . Host name should be equal to or less than 64 characters. Host names cannot start or end with - and . . To be compliant with DNS, each part of a FQDN should be equal to or less than 63 characters and the FQDN total length, including dots, should not exceed 255 characters. The value localhost means that no specific static host name for the target system is configured, and the actual host name of the installed system is configured during the processing of the network configuration, for example, by NetworkManager using DHCP or DNS. When using static IP and host name configuration, it depends on the planned system use case whether to use a short name or FQDN. Red Hat Identity Management configures FQDN during provisioning but some 3rd party software products may require a short name. In either case, to ensure availability of both forms in all situations, add an entry for the host in /etc/hosts in the format IP FQDN short-alias . Click Apply to apply the host name to the installer environment. Alternatively, in the Network and Hostname window, you can choose the Wireless option. Click Select network in the right-hand pane to select your wifi connection, enter the password if required, and click Done . Additional resources For more information about network device naming standards, see Configuring and managing networking . 17.6.1. Adding a virtual network interface You can add a virtual network interface. Procedure From the Network & Host name window, click the + button to add a virtual network interface. The Add a device dialog opens. Select one of the four available types of virtual interfaces: Bond : NIC ( Network Interface Controller ) Bonding, a method to bind multiple physical network interfaces together into a single bonded channel. Bridge : Represents NIC Bridging, a method to connect multiple separate networks into one aggregate network. Team : NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. NIC teaming is deprecated in Red Hat Enterprise Linux 9. Consider using the network bonding driver as an alternative. For details, see Configuring a network bond . Vlan ( Virtual LAN ): A method to create multiple distinct broadcast domains which are mutually isolated. Select the interface type and click Add . An editing interface dialog box opens, allowing you to edit any available settings for your chosen interface type. For more information, see Editing network interface . Click Save to confirm the virtual interface settings and return to the Network & Host name window. 
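If you prefer to create the same kind of virtual interface from the command line after installation, nmcli can do so as well. The following sketch is only an illustration and is not taken from this guide; the connection names, interface name, and bonding mode are assumptions:
# names below are hypothetical; adjust them to your hardware
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"
nmcli connection add type ethernet con-name bond0-port1 ifname enp1s0 master bond0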
Optional: To change the settings of a virtual interface, select the interface and click Configure . 17.6.2. Editing network interface configuration You can edit the configuration of a typical wired connection used during installation. Configuration of other types of networks is broadly similar, although the specific configuration parameters might be different. Note On 64-bit IBM Z, you cannot add a new connection as the network subchannels need to be grouped and set online beforehand, and this is currently done only in the booting phase. Procedure To configure a network connection manually, select the interface from the Network and Host name window and click Configure . An editing dialog specific to the selected interface opens. The options present depend on the connection type - the available options are slightly different depending on whether the connection type is a physical interface (wired or wireless network interface controller) or a virtual interface (Bond, Bridge, Team (deprecated), or Vlan) that was previously configured in Adding a virtual interface . 17.6.3. Enabling or Disabling the Interface Connection You can enable or disable specific interface connections. Procedure Click the General tab. Select the Connect automatically with priority check box to enable connection by default. Keep the default priority setting at 0 . Optional: Enable or disable all users on the system from connecting to this network by using the All users may connect to this network option. If you disable this option, only root will be able to connect to this network. Important When enabled on a wired connection, the system automatically connects during startup or reboot. On a wireless connection, the interface attempts to connect to any known wireless networks in range. For further information about NetworkManager, including the nm-connection-editor tool, see the Configuring and managing networking document. Click Save to apply the changes and return to the Network and Host name window. It is not possible to only allow a specific user other than root to use this interface, as no other users are created at this point during the installation. If you need a connection for a different user, you must configure it after the installation. 17.6.4. Setting up Static IPv4 or IPv6 Settings By default, both IPv4 and IPv6 are set to automatic configuration depending on current network settings. This means that addresses such as the local IP address, DNS address, and other settings are detected automatically when the interface connects to a network. In many cases, this is sufficient, but you can also provide static configuration in the IPv4 Settings and IPv6 Settings tabs. Complete the following steps to configure IPv4 or IPv6 settings: Procedure To set static network configuration, navigate to one of the IPv Settings tabs and from the Method drop-down menu, select a method other than Automatic , for example, Manual . The Addresses pane is enabled. Optional: In the IPv6 Settings tab, you can also set the method to Ignore to disable IPv6 on this interface. Click Add and enter your address settings. Type the IP addresses in the Additional DNS servers field; it accepts one or more IP addresses of DNS servers, for example, 10.0.0.1,10.0.0.8 . Select the Require IPv X addressing for this connection to complete check box. Selecting this option in the IPv4 Settings or IPv6 Settings tabs allow this connection only if IPv4 or IPv6 was successful. 
If this option remains disabled for both IPv4 and IPv6, the interface is able to connect if configuration succeeds on either IP protocol. Click Save to apply the changes and return to the Network & Host name window. 17.6.5. Configuring Routes You can control the access of specific connections by configuring routes. Procedure In the IPv4 Settings and IPv6 Settings tabs, click Routes to configure routing settings for a specific IP protocol on an interface. An editing routes dialog specific to the interface opens. Click Add to add a route. Select the Ignore automatically obtained routes check box to configure at least one static route and to disable all routes not specifically configured. Select the Use this connection only for resources on its network check box to prevent the connection from becoming the default route. This option can be selected even if you did not configure any static routes. This route is used only to access certain resources, such as intranet pages that require a local or VPN connection. Another (default) route is used for publicly available resources. Unlike the additional routes configured, this setting is transferred to the installed system. This option is useful only when you configure more than one interface. Click OK to save your settings and return to the editing routes dialog that is specific to the interface. Click Save to apply the settings and return to the Network and Host Name window. 17.7. Optional: Configuring the keyboard layout You can configure the keyboard layout from the Installation Summary screen. Important If you use a layout that cannot accept Latin characters, such as Russian , add the English (United States) layout and configure a keyboard combination to switch between the two layouts. If you select a layout that does not have Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This might prevent you from completing the installation. Procedure From the Installation Summary window, click Keyboard . Click + to open the Add a Keyboard Layout window to change to a different layout. Select a layout by browsing the list or use the Search field. Select the required layout and click Add . The new layout appears under the default layout. Click Options to optionally configure a keyboard switch that you can use to cycle between available layouts. The Layout Switching Options window opens. To configure key combinations for switching, select one or more key combinations and click OK to confirm your selection. Optional: When you select a layout, click the Keyboard button to open a new dialog box displaying a visual representation of the selected layout. Click Done to apply the settings and return to graphical installations. 17.8. Optional: Configuring the language support You can change the language settings from the Installation Summary screen. Procedure From the Installation Summary window, click Language Support . The Language Support window opens. The left pane lists the available language groups. If at least one language from a group is configured, a check mark is displayed and the supported language is highlighted. From the left pane, click a group to select additional languages, and from the right pane, select regional options. Repeat this process for all the languages that you want to configure. Optional: Search the language group by typing in the text box, if required. Click Done to apply the settings and return to graphical installations. 17.9. 
Optional: Configuring the date and time-related settings You can configure the date and time-related settings from the Installation Summary screen. Procedure From the Installation Summary window, click Time & Date . The Time & Date window opens. The list of cities and regions come from the Time Zone Database ( tzdata ) public domain that is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat can not add cities or regions to this database. You can find more information at the IANA official website . From the Region drop-down menu, select a region. Select Etc as your region to configure a time zone relative to Greenwich Mean Time (GMT) without setting your location to a specific region. From the City drop-down menu, select the city, or the city closest to your location in the same time zone. Toggle the Network Time switch to enable or disable network time synchronization using the Network Time Protocol (NTP). Enabling the Network Time switch keeps your system time correct as long as the system can access the internet. By default, one NTP pool is configured. Optional: Use the gear wheel button to the Network Time switch to add a new NTP, or disable or remove the default options. Click Done to apply the settings and return to graphical installations. Optional: Disable the network time synchronization to activate controls at the bottom of the page to set time and date manually. 17.10. Optional: Subscribing the system and activating Red Hat Insights Red Hat Insights is a Software-as-a-Service (SaaS) offering that provides continuous, in-depth analysis of registered Red Hat-based systems to proactively identify threats to security, performance and stability across physical, virtual and cloud environments, and container deployments. By registering your RHEL system in Red Hat Insights, you gain access to predictive analytics, security alerts, and performance optimization tools, enabling you to maintain a secure, efficient, and stable IT environment. You can register to Red Hat by using either your Red Hat account or your activation key details. You can connect your system to Red hat Insights by using the Connect to Red Hat option. Procedure From the Installation Summary screen, under Software , click Connect to Red Hat . Select Account or Activation Key . If you select Account , enter your Red Hat Customer Portal username and password details. If you select Activation Key , enter your organization ID and activation key. You can enter more than one activation key, separated by a comma, as long as the activation keys are registered to your subscription. Select the Set System Purpose check box. If the account has Simple content access mode enabled, setting the system purpose values is still important for accurate reporting of consumption in the subscription services. If your account is in the entitlement mode, system purpose enables the entitlement server to determine and automatically attach the most appropriate subscription to satisfy the intended use of the Red Hat Enterprise Linux 9 system. Select the required Role , SLA , and Usage from the corresponding drop-down lists. The Connect to Red Hat Insights check box is enabled by default. Clear the check box if you do not want to connect to Red Hat Insights. Optional: Expand Options . Select the Use HTTP proxy check box if your network environment only allows external Internet access or access to content servers through an HTTP proxy. Clear the Use HTTP proxy check box if an HTTP proxy is not used. 
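The registration that this screen performs can also be done later on the installed system from the command line. The following is a minimal sketch, assuming registration with an activation key; the organization ID and key are placeholders, not values from this guide:
subscription-manager register --org=<organization_id> --activationkey=<activation_key>
# optionally connect the system to Red Hat Insights as well
insights-client --register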
If you are running Satellite Server or performing internal testing, select the Satellite URL and Custom base URL check boxes and enter the required details. Important RHEL 9 is supported only with Satellite 6.11 or later. Check the version prior to registering the system. The Satellite URL field does not require the HTTP protocol, for example nameofhost.com . However, the Custom base URL field requires the HTTP protocol. To change the Custom base URL after registration, you must unregister, provide the new details, and then re-register. Click Register to register the system. When the system is successfully registered and subscriptions are attached, the Connect to Red Hat window displays the attached subscription details. Depending on the amount of subscriptions, the registration and attachment process might take up to a minute to complete. Click Done to return to the Installation Summary window. A Registered message is displayed under Connect to Red Hat . Additional resources About Red Hat Insights 17.11. Optional: Using network-based repositories for the installation You can configure an installation source from either auto-detected installation media, Red Hat CDN, or the network. When the Installation Summary window first opens, the installation program attempts to configure an installation source based on the type of media that was used to boot the system. The full Red Hat Enterprise Linux Server DVD configures the source as local media. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have created bootable installation media. The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Source . The Installation Source window opens. Review the Auto-detected installation media section to verify the details. This option is selected by default if you started the installation program from media containing an installation source, for example, a DVD. Click Verify to check the media integrity. Review the Additional repositories section and note that the AppStream check box is selected by default. The BaseOS and AppStream repositories are installed as part of the full installation image. Do not disable the AppStream repository check box if you want a full Red Hat Enterprise Linux 9 installation. Optional: Select the Red Hat CDN option to register your system, attach RHEL subscriptions, and install RHEL from the Red Hat Content Delivery Network (CDN). Optional: Select the On the network option to download and install packages from a network location instead of local media. This option is available only when a network connection is active. See Configuring network and host name options for information about how to configure network connections in the GUI. Note If you do not want to download and install additional repositories from a network location, proceed to Configuring software selection . Select the On the network drop-down menu to specify the protocol for downloading packages. This setting depends on the server that you want to use. Type the server address (without the protocol) into the address field. If you choose NFS, a second input field opens where you can specify custom NFS mount options . This field accepts options listed in the nfs(5) man page on your system. When selecting an NFS installation source, specify the address with a colon ( : ) character separating the host name from the path. For example, server.example.com:/path/to/directory . 
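The same network source can also be supplied non-interactively. As an illustrative sketch only, reusing the placeholder host name and path from the example above (your repository layout may differ), a Kickstart file could declare the source with:
# NFS installation source (placeholder server and path)
nfs --server=server.example.com --dir=/path/to/directory
# or an HTTP/HTTPS repository mirror
url --url=http://server.example.com/path/to/repository/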
The following steps are optional and are only required if you use a proxy for network access. Click Proxy setup to configure a proxy for an HTTP or HTTPS source. Select the Enable HTTP proxy check box and type the URL into the Proxy Host field. Select the Use Authentication check box if the proxy server requires authentication. Type in your user name and password. Click OK to finish the configuration and exit the Proxy Setup... dialog box. Note If your HTTP or HTTPS URL refers to a repository mirror, select the required option from the URL type drop-down list. All environments and additional software packages are available for selection when you finish configuring the sources. Click + to add a repository. Click - to delete a repository. Click the arrow icon to revert the current entries to the setting when you opened the Installation Source window. To activate or deactivate a repository, click the check box in the Enabled column for each entry in the list. You can name and configure your additional repository in the same way as the primary repository on the network. Click Done to apply the settings and return to the Installation Summary window. 17.12. Optional: Configuring Kdump kernel crash-dumping mechanism Kdump is a kernel crash-dumping mechanism. In the event of a system crash, Kdump captures the contents of the system memory at the moment of failure. This captured memory can be analyzed to find the cause of the crash. If Kdump is enabled, it must have a small portion of the system's memory (RAM) reserved to itself. This reserved memory is not accessible to the main kernel. Procedure From the Installation Summary window, click Kdump . The Kdump window opens. Select the Enable kdump check box. Select either the Automatic or Manual memory reservation setting. If you select Manual , enter the amount of memory (in megabytes) that you want to reserve in the Memory to be reserved field using the + and - buttons. The Usable System Memory readout below the reservation input field shows how much memory is accessible to your main system after reserving the amount of RAM that you select. Click Done to apply the settings and return to graphical installations. The amount of memory that you reserve is determined by your system architecture (AMD64 and Intel 64 have different requirements than IBM Power) as well as the total amount of system memory. In most cases, automatic reservation is satisfactory. Additional settings, such as the location where kernel crash dumps will be saved, can only be configured after the installation using either the system-config-kdump graphical interface, or manually in the /etc/kdump.conf configuration file. 17.13. Optional: Selecting a security profile You can apply security policy during your Red Hat Enterprise Linux 9 installation and configure it to use on your system before the first boot. 17.13.1. About security policy The Red Hat Enterprise Linux includes OpenSCAP suite to enable automated configuration of the system in alignment with a particular security policy. The policy is implemented using the Security Content Automation Protocol (SCAP) standard. The packages are available in the AppStream repository. However, by default, the installation and post-installation process does not enforce any policies and therefore does not involve any checks unless specifically configured. Applying a security policy is not a mandatory feature of the installation program. 
If you apply a security policy to the system, it is installed using restrictions defined in the profile that you selected. The openscap-scanner and scap-security-guide packages are added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. When you select a security policy, the Anaconda GUI installer requires the configuration to adhere to the policy's requirements. There might be conflicting package selections, as well as separate partitions defined. Only after all the requirements are met, you can start the installation. At the end of the installation process, the selected OpenSCAP security policy automatically hardens the system and scans it to verify compliance, saving the scan results to the /root/openscap_data directory on the installed system. By default, the installer uses the content of the scap-security-guide package bundled in the installation image. You can also load external content from an HTTP, HTTPS, or FTP server. 17.13.2. Configuring a security profile You can configure a security policy from the Installation Summary window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Security Profile . The Security Profile window opens. To enable security policies on the system, toggle the Apply security policy switch to ON . Select one of the profiles listed in the top pane. Click Select profile . Profile changes that you must apply before installation appear in the bottom pane. Click Change content to use a custom profile. A separate window opens allowing you to enter a URL for valid security content. Click Fetch to retrieve the URL. You can load custom profiles from an HTTP , HTTPS , or FTP server. Use the full address of the content including the protocol, such as http:// . A network connection must be active before you can load a custom profile. The installation program detects the content type automatically. Click Use SCAP Security Guide to return to the Security Profile window. Click Done to apply the settings and return to the Installation Summary window. 17.13.3. Profiles not compatible with Server with GUI Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles: Table 17.2. Profiles not compatible with Server with GUI Profile name Profile ID Justification Notes [DRAFT] CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. [DRAFT] CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. DISA STIG for Red Hat Enterprise Linux 9 xccdf_org.ssgproject.content_profile_ stig Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. 
To install a RHEL system as a Server with GUI aligned with DISA STIG, you can use the DISA STIG with GUI profile BZ#1648162 17.13.4. Deploying baseline-compliant RHEL systems using Kickstart You can deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Prerequisites The scap-security-guide package is installed on your RHEL 9 system. Procedure Open the /usr/share/scap-security-guide/kickstart/ssg-rhel9-ospp-ks.cfg Kickstart file in an editor of your choice. Update the partitioning scheme to fit your configuration requirements. For OSPP compliance, the separate partitions for /boot , /home , /var , /tmp , /var/log , /var/tmp , and /var/log/audit must be preserved, and you can only change the size of the partitions. Start a Kickstart installation as described in Performing an automated installation using Kickstart . Important Passwords in Kickstart files are not checked for OSPP requirements. Verification To check the current status of the system after installation is complete, reboot the system and start a new scan: Additional resources OSCAP Anaconda Add-on Kickstart commands and options reference: %addon org_fedora_oscap 17.13.5. Additional resources scap-security-guide(8) - The manual page for the scap-security-guide project contains information about SCAP security profiles, including examples on how to utilize the provided benchmarks using the OpenSCAP utility. Red Hat Enterprise Linux security compliance information is available in the Security hardening document.
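As a practical aside (this invocation is an illustration, not a step in this guide), you can list the profiles contained in the SCAP content shipped with the scap-security-guide package before selecting one in the installer or in a Kickstart file:
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml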
[ "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/customizing-the-system-in-the-installer_rhel-installer
3.5. Displaying Status
3.5. Displaying Status You can display the status of the cluster and the cluster resources with the following command. If you do not specify a commands parameter, this command displays all information about the cluster and the resources. To display the status of only a particular cluster component, specify resources , groups , cluster , nodes , or pcsd as the commands parameter.
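For example, the following invocations limit the report to resources or to nodes (illustrative only; the output depends on your cluster configuration):
pcs status resources
pcs status nodes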
[ "pcs status commands" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-pcsstatus-HAAR
Chapter 13. Migrating virtual machine instances between Compute nodes
Chapter 13. Migrating virtual machine instances between Compute nodes Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. You sometimes need to migrate instances from one Compute node to another Compute node in the data plane, to perform maintenance, rebalance the workload, or replace a failed or failing node. Compute node maintenance If you need to temporarily take a Compute node out of service, for instance, to perform hardware maintenance or repair, kernel upgrades and software updates, you can migrate instances running on the Compute node to another Compute node. Failing Compute node If a Compute node is about to fail and you need to service it or replace it, you can migrate instances from the failing Compute node to a healthy Compute node. Failed Compute nodes If a Compute node has already failed, you can evacuate the instances. You can rebuild instances from the original image on another Compute node, using the same name, UUID, network addresses, and any other allocated resources the instance had before the Compute node failed. Workload rebalancing You can migrate one or more instances to another Compute node to rebalance the workload. For example, you can consolidate instances on a Compute node to conserve power, migrate instances to a Compute node that is physically closer to other networked resources to reduce latency, or distribute instances across Compute nodes to avoid hot spots and increase resiliency. All Compute nodes provide secure migration. All Compute nodes also require a shared SSH key to provide the users of each host with access to other Compute nodes during the migration process. 13.1. Migration types Red Hat OpenStack Services on OpenShift (RHOSO) supports the following types of migration. Cold migration Cold migration, or non-live migration, involves shutting down a running instance before migrating it from the source Compute node to the destination Compute node. Cold migration involves some downtime for the instance. The migrated instance maintains access to the same volumes and IP addresses. Note Cold migration requires that both the source and destination Compute nodes are running. Live migration Live migration involves moving the instance from the source Compute node to the destination Compute node without shutting it down, and while maintaining state consistency. Live migrating an instance involves little or no perceptible downtime. However, live migration does impact performance for the duration of the migration operation. Therefore, instances should be taken out of the critical path while being migrated. Important Live migration impacts the performance of the workload being moved. Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. Note Live migration requires that both the source and destination Compute nodes are running. Evacuation If you need to migrate instances because the source Compute node has already failed, you can evacuate the instances. 13.2. Migration constraints Migration constraints typically arise with block migration, configuration disks, or when one or more instances access physical hardware on the Compute node. CPU constraints The source and destination Compute nodes must have the same CPU architecture.
For example, Red Hat does not support migrating an instance from a ppc64le CPU to a x86_64 CPU. Migration between different CPU models is not supported. In some cases, the CPU of the source and destination Compute node must match exactly, such as instances that use CPU host passthrough. In all cases, the CPU features of the destination node must be a superset of the CPU features on the source node. Memory constraints The destination Compute node must have sufficient available RAM. Memory oversubscription can cause migration to fail. Block migration constraints Migrating instances that use disks that are stored locally on a Compute node takes significantly longer than migrating volume-backed instances that use shared storage, such as Red Hat Ceph Storage. This latency arises because OpenStack Compute (nova) migrates local disks block-by-block between the Compute nodes over the control plane network by default. By contrast, volume-backed instances that use shared storage, such as Red Hat Ceph Storage, do not have to migrate the volumes, because each Compute node already has access to the shared storage. Note Network congestion in the control plane network caused by migrating local disks or instances that consume large amounts of RAM might impact the performance of other systems that use the control plane network, such as RabbitMQ. Read-only drive migration constraints Migrating a drive is supported only if the drive has both read and write capabilities. For example, OpenStack Compute (nova) cannot migrate a CD-ROM drive or a read-only config drive. However, OpenStack Compute (nova) can migrate a drive with both read and write capabilities. The config drive vfat is the only config drive format that OpenStack Compute (nova) can migrate. Live migration constraints In some cases, live migrating instances involves additional constraints. Important Live migration impacts the performance of the workload being moved. Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. No new operations during migration To achieve state consistency between the copies of the instance on the source and destination nodes, RHOSP must prevent new operations during live migration. Otherwise, live migration might take a long time or potentially never end if writes to memory occur faster than live migration can replicate the state of the memory. CPU pinning with NUMA The NovaSchedulerEnabledFilters parameter in the Compute configuration must include the values AggregateInstanceExtraSpecsFilter and NUMATopologyFilter . Multi-cell clouds In a multi-cell cloud, you can live migrate instances to a different host in the same cell, but not across cells. Floating instances When you migrate shared (floating) instances, the value of the NovaComputeCpuSharedSet field between the destination and source Compute nodes must match, so that the instances are allocated to CPUs configured for shared (unpinned) instances at the destination. Therefore, if you need to live migrate floating instances, ensure that all the Compute nodes have the same CPU mappings for dedicated (pinned) and shared instances, or use a host aggregate for the shared instances. Destination Compute node capacity The destination Compute node must have sufficient capacity to host the instance that you want to migrate. SR-IOV live migration Instances with SR-IOV-based network interfaces can be live migrated. 
Live migrating instances with direct mode SR-IOV network interfaces incurs network downtime. This is because the direct mode interfaces need to be detached and re-attached during the migration. Live migration on ML2/OVS deployments During the live migration process, when the virtual machine is unpaused in the destination host, the metadata service might not be available because the metadata server proxy has not yet spawned. This unavailability is brief. The service becomes available momentarily and the live migration succeeds. Constraints that preclude live migration You cannot live migrate an instance that uses the following features. PCI passthrough QEMU/KVM hypervisors support attaching PCI devices on the Compute node to an instance. Use PCI passthrough to give an instance exclusive access to PCI devices, which appear and behave as if they are physically attached to the operating system of the instance. However, because PCI passthrough involves direct access to the physical devices, QEMU/KVM does not support live migration of instances using PCI passthrough. 13.3. Preparing to migrate Before you migrate one or more instances, you need to determine the Compute node names and the IDs of the instances to migrate. Prerequisites You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. The oc command line tool is installed on the workstation. Procedure Access the remote shell for the OpenStackClient pod from your workstation: List the instances on the source Compute node and locate the ID of the instance or instances that you want to migrate: Replace <source> with the name or ID of the source Compute node. Optional: If you are migrating instances from a source Compute node to perform maintenance on the node, you must disable the node to prevent the scheduler from assigning new instances to the node during maintenance: Replace <source> with the host name of the source Compute node. Exit the OpenStackClient pod: You are now ready to perform the migration. Follow the required procedure detailed in Cold migrating an instance or Live migrating an instance . 13.4. Cold migrating an instance Cold migrating an instance involves stopping the instance and moving it to another Compute node. Cold migration facilitates migration scenarios that live migrating cannot facilitate, such as migrating instances that use PCI passthrough. The scheduler automatically selects the destination Compute node. For more information, see Migration constraints . Procedure Access the remote shell for the OpenStackClient pod from your workstation: To cold migrate an instance, enter the following command to power off and move the instance: Replace <instance> with the name or ID of the instance to migrate. Specify the --block-migration flag if migrating a locally stored volume. Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance: A status of "VERIFY_RESIZE" indicates you need to confirm or revert the migration: If the migration worked as expected, confirm it: Replace <instance> with the name or ID of the instance to migrate. A status of "ACTIVE" indicates that the instance is ready to use. If the migration did not work as expected, revert it: Replace <instance> with the name or ID of the instance. Restart the instance: Replace <instance> with the name or ID of the instance. 
Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. Exit the OpenStackClient pod: 13.5. Live migrating an instance Live migration moves an instance from a source Compute node to a destination Compute node with a minimal amount of downtime. Live migration might not be appropriate for all instances. For more information, see Migration constraints . Procedure Access the remote shell for the OpenStackClient pod from your workstation: To live migrate an instance, specify the instance and the destination Compute node: Replace <instance> with the name or ID of the instance. Replace <dest> with the name or ID of the destination Compute node. Note The openstack server migrate command covers migrating instances with shared storage, which is the default. Specify the --block-migration flag to migrate a locally stored volume: Confirm that the instance is migrating: Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance to confirm if the migration was successful: Replace <dest> with the name or ID of the destination Compute node. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. Exit the OpenStackClient pod: 13.6. Checking migration status Migration involves several state transitions before migration is complete. During a healthy migration, the migration state typically transitions as follows: Queued: The Compute service has accepted the request to migrate an instance, and migration is pending. Preparing: The Compute service is preparing to migrate the instance. Running: The Compute service is migrating the instance. Post-migrating: The Compute service has built the instance on the destination Compute node and is releasing resources on the source Compute node. Completed: The Compute service has completed migrating the instance and finished releasing resources on the source Compute node. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Retrieve the list of migration IDs for the instance: Replace <instance> with the name or ID of the instance. Show the status of the migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Running the openstack server migration show command returns the following example output: Tip The Compute service measures progress of the migration by the number of remaining memory bytes to copy. If this number does not decrease over time, the migration might be unable to complete, and the Compute service might abort it. Exit the OpenStackClient pod: Sometimes instance migration can take a long time or encounter errors. For more information, see Troubleshooting migration . 13.7. Evacuating an instance If you want to move an instance from a failed or shut-down Compute node to a new host in the same environment, you can evacuate it. The evacuate process destroys the original instance and rebuilds it on another Compute node using the original image, instance name, UUID, network addresses, and any other resources the original instance had allocated to it. 
If the instance uses shared storage, the instance root disk is not rebuilt during the evacuate process, as the disk remains accessible by the destination Compute node. If the instance does not use shared storage, then the instance root disk is also rebuilt on the destination Compute node. Note You can only perform an evacuation when the Compute node is fenced, and the API reports that the state of the Compute node is "down" or "forced-down". If the Compute node is not reported as "down" or "forced-down", the evacuate command fails. To perform an evacuation, you must be a cloud administrator. 13.7.1. Evacuating an instance To evacuate all instances on a host, you must evacuate them one at a time. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Confirm that the instance is not running: Replace <node> with the name or UUID of the Compute node that hosts the instance. Check the instance task state: Replace <instance> with the name or UUID of the instance that you want to evacuate. Note If the instance task state is not "NONE" the evacuation might fail. Confirm that the host Compute node is fenced or shut down: Replace <node> with the name or UUID of the Compute node that hosts the instance to evacuate. To perform an evacuation, the Compute node must have a status of down or forced-down . Disable the Compute node: Replace <node> with the name of the Compute node to evacuate the instance from. Replace <disable_host_reason> with details about why you disabled the Compute node. Evacuate the instance: Optional: Replace <dest> with the name of the Compute node to evacuate the instance to. If you do not specify the destination Compute node, the Compute scheduler selects one for you. You can find possible Compute nodes by using the following command: Optional: Replace <password> with the administrative password required to access the evacuated instance. If a password is not specified, a random password is generated and output when the evacuation is complete. Note The password is changed only when ephemeral instance disks are stored on the local hypervisor disk. The password is not changed if the instance is hosted on shared storage or has a Block Storage volume attached, and no error message is displayed to inform you that the password was not changed. Replace <instance> with the name or ID of the instance to evacuate. Note If the evacuation fails and the task state of the instance is not "NONE", contact Red Hat Support for help to recover the instance. Optional: Enable the Compute node when it is recovered: Replace <node> with the name of the Compute node to enable. Exit the OpenStackClient pod: 13.8. Troubleshooting migration The following issues can arise during instance migration: The migration process encounters errors. The migration process never ends. Performance of the instance degrades after migration. 13.8.1. Errors during migration The following issues can send the migration operation into an error state: The Compute service is shutting down. A race condition occurs. When live migration enters a failed state, it is typically followed by an error state. The following common issues can cause a failed state: A destination Compute host is not available. A scheduler exception occurs. The rebuild process fails due to insufficient computing resources. A server group check fails. The instance on the source Compute node gets deleted before migration to the destination Compute node is complete. 13.8.2. 
Never-ending live migration Live migrations are left in a perpetual running state when they fail to complete, which can occur when requests from the guest OS to the instance running on the source Compute node create changes that occur faster than the Compute service can replicate them to the destination Compute node. Use one of the following methods to address this situation: Abort the live migration. Force the live migration to complete. Aborting live migration If the instance state changes faster than the migration procedure can copy it to the destination node, and you do not want to temporarily suspend the instance operations, you can abort the live migration. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Abort the live migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Exit the OpenStackClient pod: Forcing live migration to complete If the instance state changes faster than the migration procedure can copy it to the destination node, and you want to temporarily suspend the instance operations to force migration to complete, you can force the live migration procedure to complete. Important Forcing live migration to complete might lead to perceptible downtime. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Force the live migration to complete: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Exit the OpenStackClient pod:
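While a long-running live migration is in progress, it can be convenient to poll its status periodically. This is an illustrative sketch rather than part of the documented procedure, and it assumes the watch utility is available in your client environment; replace the placeholders as described above:
watch -n 30 openstack server migration show <instance> <migration_id>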
[ "oc rsh -n openstack openstackclient", "openstack server list --host <source> --all-projects", "openstack compute service set <source> nova-compute --disable", "exit", "oc rsh -n openstack openstackclient", "openstack server migrate <instance> --wait", "openstack server list --all-projects", "openstack server resize --confirm <instance>", "openstack server resize --revert <instance>", "openstack server start <instance>", "openstack compute service set <source> nova-compute --enable", "exit", "oc rsh -n openstack openstackclient", "openstack server migrate <instance> --live-migration [--host <dest>] --wait", "openstack server migrate <instance> --live-migration [--host <dest>] --wait --block-migration", "openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | MIGRATING | | ... | ... | +----------------------+--------------------------------------+", "openstack server list --host <dest> --all-projects", "openstack compute service set <source> nova-compute --enable", "exit", "oc rsh -n openstack openstackclient", "openstack server migration list --server <instance> +----+-------------+----------- (...) | Id | Source Node | Dest Node | (...) +----+-------------+-----------+ (...) | 2 | - | - | (...) +----+-------------+-----------+ (...)", "openstack server migration show <instance> <migration_id>", "+------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | created_at | 2017-03-08T02:53:06.000000 | | dest_compute | controller | | dest_host | - | | dest_node | - | | disk_processed_bytes | 0 | | disk_remaining_bytes | 0 | | disk_total_bytes | 0 | | id | 2 | | memory_processed_bytes | 65502513 | | memory_remaining_bytes | 786427904 | | memory_total_bytes | 1091379200 | | server_uuid | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | source_compute | compute2 | | source_node | - | | status | running | | updated_at | 2017-03-08T02:53:47.000000 | +------------------------+--------------------------------------+", "exit", "oc rsh -n openstack openstackclient", "openstack server list --host <node> --all-projects", "openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | NONE | | ... | ... | +----------------------+--------------------------------------+", "openstack baremetal node show <node>", "openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>", "openstack server evacuate [--host <dest>] [--password <password>] <instance>", "openstack hypervisor list", "openstack compute service set <node> nova-compute --enable", "exit", "oc rsh -n openstack openstackclient", "openstack server migration list --server <instance>", "openstack server migration abort <instance> <migration_id>", "exit", "oc rsh -n openstack openstackclient", "openstack server migration list --server <instance>", "openstack server migration force complete <instance> <migration_id>", "exit" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_migrating-virtual-machine-instances-between-compute-nodes_migrating-instances
5.2. Network Interfaces
5.2. Network Interfaces 5.2.1. Adding a New Network Interface You can add multiple network interfaces to virtual machines. Doing so allows you to put your virtual machine on multiple logical networks. Note You can create an overlay network for your virtual machines, isolated from the hosts, by defining a logical network that is not attached to the physical interfaces of the host. For example, you can create a DMZ environment, in which the virtual machines communicate among themselves over the bridge created in the host. The overlay network uses OVN, which must be installed as an external network provider. See the Administration Guide for more information Procedure Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Network Interfaces tab. Click New . Enter the Name of the network interface. Select the Profile and the Type of network interface from the drop-down lists. The Profile and Type drop-down lists are populated in accordance with the profiles and network types available to the cluster and the network interface cards available to the virtual machine. Select the Custom MAC address check box and enter a MAC address for the network interface card as required. Click OK . The new network interface is listed in the Network Interfaces tab in the details view of the virtual machine. The Link State is set to Up by default when the network interface card is defined on the virtual machine and connected to the network. For more details on the fields in the New Network Interface window, see Virtual Machine Network Interface dialogue entries . 5.2.2. Editing a Network Interface In order to change any network settings, you must edit the network interface. This procedure can be performed on virtual machines that are running, but some actions can be performed only on virtual machines that are not running. Editing Network Interfaces Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Network Interfaces tab and select the network interface to edit. Click Edit . Change settings as required. You can specify the Name , Profile , Type , and Custom MAC address . See Adding a Network Interface . Click OK . 5.2.3. Hot Plugging a Network Interface You can hot plug network interfaces. Hot plugging means enabling and disabling devices while a virtual machine is running. Note The guest operating system must support hot plugging network interfaces. Hot Plugging Network Interfaces Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Network Interfaces tab and select the network interface to hot plug. Click Edit . Set the Card Status to Plugged to enable the network interface, or set it to Unplugged to disable the network interface. Click OK . 5.2.4. Removing a Network Interface Removing Network Interfaces Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Network Interfaces tab and select the network interface to remove. Click Remove . Click OK . 5.2.5. Configuring a virtual machine to ignore NICs You can configure the ovirt-guest-agent on a virtual machine to ignore certain NICs. This prevents IP addresses associated with network interfaces created by certain software from appearing in reports. You must specify the name and number of the network interface you want to ignore (for example, eth0 , docker0 ). 
Procedure In the /etc/ovirt-guest-agent.conf configuration file on the virtual machine, insert the following line, with the NICs to be ignored separated by spaces: ignored_nics = first_NIC_to_ignore second_NIC_to_ignore Start the agent: # systemctl start ovirt-guest-agent Note Some virtual machine operating systems automatically start the guest agent during installation. If your virtual machine's operating system automatically starts the guest agent or if you need to configure the denylist on many virtual machines, use the configured virtual machine as a template for creating additional virtual machines. See Creating a template from an existing virtual machine for details.
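As a concrete illustration of the setting described above (the interface names are hypothetical), the configuration line and the subsequent agent restart might look like this:
ignored_nics = docker0 eth1
# restart the agent so the change takes effect
systemctl restart ovirt-guest-agent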
[ "ignored_nics = first_NIC_to_ignore second_NIC_to_ignore", "systemctl start ovirt-guest-agent" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Network_Interfaces
21.4. Exposing Automount Maps to NIS Clients
21.4. Exposing Automount Maps to NIS Clients If any automount maps are already defined, you must manually add them to the NIS configuration in IdM. This ensures the maps are exposed to NIS clients. The NIS server is managed by a special plug-in entry in the IdM LDAP directory. Each NIS domain and map used by the NIS server is added as a sub-entry in this container. The NIS domain entry contains: the name of the NIS domain, the name of the NIS map, information on how to find the directory entries to use as the NIS map's contents, and information on which attributes to use as the NIS map's key and value. Most of these settings are the same for every map. 21.4.1. Adding an Automount Map IdM stores the automount maps, grouped by the automount location, in the cn=automount branch of the IdM directory tree. You can add the NIS domain and maps using the LDAP protocol. For example, to add an automount map named auto.example in the default location for the example.com domain: Note Set the nis-domain attribute to the name of your NIS domain. The value set in the nis-base attribute must correspond: To an existing automount map set using the ipa automountmap-* commands. To an existing automount location set using the ipa automountlocation-* commands. After you set the entry, you can verify the automount map:
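As a hedged prerequisite check before adding the plug-in entry, the automount location and map referenced in nis-base can be inspected and, if necessary, created with the ipa automountmap-* commands mentioned above. The location default and map auto.example simply mirror the example; confirm the exact options with ipa help automount:
ipa automountmap-find default
ipa automountmap-add default auto.example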
[ "ldapadd -h server.example.com -x -D \"cn=Directory Manager\" -W dn: nis-domain=example.com+nis-map=auto.example,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: example.com nis-map: auto.example nis-filter: (objectclass=automount) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} nis-base: automountmapname=auto.example,cn=default,cn=automount,dc=example,dc=com", "ypcat -k -d example.com -h server.example.com auto.example" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/Exposing_Automount_Maps_to_NIS_Clients
Chapter 2. Crush admin overview
Chapter 2. Crush admin overview The Controlled Replication Under Scalable Hashing (CRUSH) algorithm determines how to store and retrieve data by computing data storage locations. 2.1. Crush introduction The CRUSH map for your storage cluster describes your device locations within CRUSH hierarchies and a rule for each hierarchy that determines how Ceph stores data. The CRUSH map contains at least one hierarchy of nodes and leaves. The nodes of a hierarchy, called "buckets" in Ceph, are any aggregation of storage locations as defined by their type. For example, rows, racks, chassis, hosts, and devices. Each leaf of the hierarchy consists essentially of one of the storage devices in the list of storage devices. A leaf is always contained in one node or "bucket." A CRUSH map also has a list of rules that determine how CRUSH stores and retrieves data. Note Storage devices are added to the CRUSH map when adding an OSD to the cluster. The CRUSH algorithm distributes data objects among storage devices according to a per-device weight value, approximating a uniform probability distribution. CRUSH distributes objects and their replicas or erasure-coding chunks according to the hierarchical cluster map an administrator defines. The CRUSH map represents the available storage devices and the logical buckets that contain them for the rule, and by extension each pool that uses the rule. To map placement groups to OSDs across failure domains or performance domains, a CRUSH map defines a hierarchical list of bucket types; that is, under types in the generated CRUSH map. The purpose of creating a bucket hierarchy is to segregate the leaf nodes by their failure domains or performance domains or both. Failure domains include hosts, chassis, racks, power distribution units, pods, rows, rooms, and data centers. Performance domains include failure domains and OSDs of a particular configuration. For example, SSDs, SAS drives with SSD journals, SATA drives, and so on. Devices have the notion of a class , such as hdd , ssd and nvme to more rapidly build CRUSH hierarchies with a class of devices. With the exception of the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary, and you can define it according to your own needs if the default types do not suit your requirements. We recommend adapting your CRUSH map bucket types to your organization's hardware naming conventions and using instance names that reflect the physical hardware names. Your naming practice can make it easier to administer the cluster and troubleshoot problems when an OSD or other hardware malfunctions and the administrator needs remote or physical access to the host or other hardware. In the following example, the bucket hierarchy has four leaf buckets ( osd 1-4 ), two node buckets ( host 1-2 ) and one rack node ( rack 1 ). Since leaf nodes reflect storage devices declared under the devices list at the beginning of the CRUSH map, there is no need to declare them as bucket instances. The second lowest bucket type in the hierarchy usually aggregates the devices; that is, it is usually the computer containing the storage media, and uses whatever term administrators prefer to describe it, such as "node", "computer", "server," "host", "machine", and so on. In high density environments, it is increasingly common to see multiple hosts/nodes per card and per chassis.
Make sure to account for card and chassis failure too, for example, the need to pull a card or chassis if a node fails can result in bringing down numerous hosts/nodes and their OSDs. When declaring a bucket instance, specify its type, give it a unique name as a string, assign it an optional unique ID expressed as a negative integer, specify a weight relative to the total capacity or capability of its items, specify the bucket algorithm such as straw2 , and the hash that is usually 0 reflecting hash algorithm rjenkins1 . A bucket can have one or more items. The items can consist of node buckets or leaves. Items can have a weight that reflects the relative weight of the item. 2.1.1. Dynamic data placement Ceph Clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph Clients: By distributing CRUSH maps to Ceph clients, CRUSH empowers Ceph clients to communicate with OSDs directly. This means that Ceph clients avoid a centralized object look-up table that could act as a single point of failure, a performance bottleneck, a connection limitation at a centralized look-up server and a physical limit to the storage cluster's scalability. Ceph OSDs: By distributing CRUSH maps to Ceph OSDs, Ceph empowers OSDs to handle replication, backfilling and recovery. This means that the Ceph OSDs handle storage of object replicas (or coding chunks) on behalf of the Ceph client. It also means that Ceph OSDs know enough about the cluster to re-balance the cluster (backfilling) and recover from failures dynamically. 2.1.2. CRUSH failure domain Having multiple object replicas or M erasure coding chunks helps prevent data loss, but it is not sufficient to address high availability. By reflecting the underlying physical organization of the Ceph Storage Cluster, CRUSH can model-and thereby address-potential sources of correlated device failures. By encoding the cluster's topology into the cluster map, CRUSH placement policies can separate object replicas or erasure coding chunks across different failure domains while still maintaining the desired pseudo-random distribution. For example, to address the possibility of concurrent failures, it might be desirable to ensure that data replicas or erasure coding chunks are on devices using different shelves, racks, power supplies, controllers or physical locations. This helps to prevent data loss and allows the cluster to operate in a degraded state. 2.1.3. CRUSH performance domain Ceph can support multiple hierarchies to separate one type of hardware performance profile from another type of hardware performance profile. For example, CRUSH can create one hierarchy for hard disk drives and another hierarchy for SSDs. Performance domains- hierarchies that take the performance profile of the underlying hardware into consideration- are increasingly popular due to the need to support different performance characteristics. Operationally, these are just CRUSH maps with more than one root type bucket. Use case examples include: Object Storage: Ceph hosts that serve as an object storage back end for S3 and Swift interfaces might take advantage of less expensive storage media such as SATA drives that might not be suitable for VMs- reducing the cost per gigabyte for object storage, while separating more economical storage hosts from more performing ones intended for storing volumes and images on cloud platforms. HTTP tends to be the bottleneck in object storage systems. 
Cold Storage : Systems designed for cold storage- infrequently accessed data, or data retrieval with relaxed performance requirements- might take advantage of less expensive storage media and erasure coding. However, erasure coding might require a bit of additional RAM and CPU, and thus differ in RAM and CPU requirements from a host used for object storage or VMs. SSD-backed Pools: SSDs are expensive, but they provide significant advantages over hard disk drives. SSDs have no seek time and they provide high total throughput. In addition to using SSDs for journaling, a cluster can support SSD-backed pools. Common use cases include high performance SSD pools. For example, it is possible to map the .rgw.buckets.index pool for the Ceph Object Gateway to SSDs instead of SATA drives. A CRUSH map supports the notion of a device class . Ceph can discover aspects of a storage device and automatically assign a class such as hdd , ssd or nvme . However, CRUSH is not limited to these defaults. For example, CRUSH hierarchies might also be used to separate different types of workloads. For example, an SSD might be used for a journal or write-ahead log, a bucket index or for raw object storage. CRUSH can support different device classes, such as ssd-bucket-index or ssd-object-storage so Ceph does not use the same storage media for different workloads- making performance more predictable and consistent. Behind the scenes, Ceph generates a CRUSH root for each device-class. These roots should only be modified by setting or changing device classes on OSDs. You can view the generated roots using the following command: Example 2.2. CRUSH hierarchy The CRUSH map is a directed acyclic graph, so it can accommodate multiple hierarchies, for example, performance domains. The easiest way to create and modify a CRUSH hierarchy is with the Ceph CLI; however, you can also decompile a CRUSH map, edit it, recompile it, and activate it. When declaring a bucket instance with the Ceph CLI, you must specify its type and give it a unique string name. Ceph automatically assigns a bucket ID, sets the algorithm to straw2 , sets the hash to 0 reflecting rjenkins1 and sets a weight. When modifying a decompiled CRUSH map, assign the bucket a unique ID expressed as a negative integer (optional), specify a weight relative to the total capacity/capability of its item(s), specify the bucket algorithm (usually straw2 ), and the hash (usually 0 , reflecting hash algorithm rjenkins1 ). A bucket can have one or more items. The items can consist of node buckets (for example, racks, rows, hosts) or leaves (for example, an OSD disk). Items can have a weight that reflects the relative weight of the item. When modifying a decompiled CRUSH map, you can declare a node bucket with the following syntax: For example, using the diagram above, we would define two host buckets and one rack bucket. The OSDs are declared as items within the host buckets: Note In the foregoing example, note that the rack bucket does not contain any OSDs. Rather it contains lower level host buckets, and includes the sum total of their weight in the item entry. 2.2.1. CRUSH location A CRUSH location is the position of an OSD in terms of the CRUSH map's hierarchy. When you express a CRUSH location on the command line interface, a CRUSH location specifier takes the form of a list of name/value pairs describing the OSD's position. 
For example, if an OSD is in a particular row, rack, chassis and host, and is part of the default CRUSH tree, its CRUSH location could be described as: Note: The order of the keys does not matter. The key name (left of = ) must be a valid CRUSH type . By default these include root , datacenter , room , row , pod , pdu , rack , chassis and host . You might edit the CRUSH map to change the types to suit your needs. You do not need to specify all the buckets/keys. For example, by default, Ceph automatically sets a ceph-osd daemon's location to be root=default host={HOSTNAME} (based on the output from hostname -s ). 2.2.2. Adding a bucket To add a bucket instance to your CRUSH hierarchy, specify the bucket name and its type. Bucket names must be unique in the CRUSH map. If you plan to use multiple hierarchies, for example, for different hardware performance profiles, consider naming buckets based on their type of hardware or use case. For example, you could create a hierarchy for solid state drives ( ssd ), a hierarchy for SAS disks with SSD journals ( hdd-journal ), and another hierarchy for SATA drives ( hdd ): The Ceph CLI outputs: Important Using colons (:) in bucket names is not supported. Add an instance of each bucket type you need for your hierarchy. The following example demonstrates adding buckets for a row with a rack of SSD hosts and a rack of hosts for object storage. Once you have completed these steps, view your tree. Notice that the hierarchy remains flat. You must move your buckets into a hierarchical position after you add them to the CRUSH map. 2.2.3. Moving a bucket When you create your initial cluster, Ceph has a default CRUSH map with a root bucket named default and your initial OSD hosts appear under the default bucket. When you add a bucket instance to your CRUSH map, it appears in the CRUSH hierarchy, but it does not necessarily appear under a particular bucket. To move a bucket instance to a particular location in your CRUSH hierarchy, specify the bucket name and its type. For example: Once you have completed these steps, you can view your tree. Note You can also use ceph osd crush create-or-move to create a location while moving an OSD. 2.2.4. Removing a bucket To remove a bucket instance from your CRUSH hierarchy, specify the bucket name. For example: Or: Note The bucket must be empty in order to remove it. If you are removing higher level buckets (for example, a root like default ), check to see if a pool uses a CRUSH rule that selects that bucket. If so, you need to modify your CRUSH rules; otherwise, peering fails. 2.2.5. CRUSH Bucket algorithms When you create buckets using the Ceph CLI, Ceph sets the algorithm to straw2 by default. Ceph supports four bucket algorithms, each representing a tradeoff between performance and reorganization efficiency. If you are unsure of which bucket type to use, we recommend using a straw2 bucket. The bucket algorithms are: Uniform: Uniform buckets aggregate devices with exactly the same weight. For example, when firms commission or decommission hardware, they typically do so with many machines that have exactly the same physical configuration (for example, bulk purchases). When storage devices have exactly the same weight, you can use the uniform bucket type, which allows CRUSH to map replicas into uniform buckets in constant time. With non-uniform weights, you should use another bucket algorithm. List : List buckets aggregate their content as linked lists. 
Based on the RUSH (Replication Under Scalable Hashing) P algorithm, a list is a natural and intuitive choice for an expanding cluster : either an object is relocated to the newest device with some appropriate probability, or it remains on the older devices as before. The result is optimal data migration when items are added to the bucket. Items removed from the middle or tail of the list, however, can result in a significant amount of unnecessary movement, making list buckets most suitable for circumstances in which they never, or very rarely, shrink. Tree : Tree buckets use a binary search tree. They are more efficient than list buckets when a bucket contains a larger set of items. Based on the RUSH (Replication Under Scalable Hashing) R algorithm, tree buckets reduce the placement time to O(log n), making them suitable for managing much larger sets of devices or nested buckets. Straw2 (default): List and Tree buckets use a divide and conquer strategy in a way that either gives certain items precedence, for example, those at the beginning of a list, or obviates the need to consider entire subtrees of items at all. That improves the performance of the replica placement process, but can also introduce suboptimal reorganization behavior when the contents of a bucket change due to an addition, removal, or re-weighting of an item. The straw2 bucket type allows all items to fairly "compete" against each other for replica placement through a process analogous to a draw of straws. 2.3. Ceph OSDs in CRUSH Once you have a CRUSH hierarchy for the OSDs, add OSDs to the CRUSH hierarchy. You can also move or remove OSDs from an existing hierarchy. The Ceph CLI usage has the following values: id Description The numeric ID of the OSD. Type Integer Required Yes Example 0 name Description The full name of the OSD. Type String Required Yes Example osd.0 weight Description The CRUSH weight for the OSD. Type Double Required Yes Example 2.0 root Description The name of the root bucket of the hierarchy or tree in which the OSD resides. Type Key-value pair. Required Yes Example root=default , root=replicated_rule , and so on bucket-type Description One or more name-value pairs, where the name is the bucket type and the value is the bucket's name. You can specify a CRUSH location for an OSD in the CRUSH hierarchy. Type Key-value pairs. Required No Example datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1 2.3.1. Viewing OSDs in CRUSH The ceph osd crush tree command prints CRUSH buckets and items in a tree view. Use this command to determine a list of OSDs in a particular bucket. It will print output similar to ceph osd tree . To return additional details, execute the following: The command returns an output similar to the following: 2.3.2. Adding an OSD to CRUSH Adding a Ceph OSD to a CRUSH hierarchy is the final step before you might start an OSD (rendering it up and in ) and Ceph assigns placement groups to the OSD. You must prepare a Ceph OSD before you add it to the CRUSH hierarchy. Deployment utilities, such as the Ceph Orchestrator, can perform this step for you. For example, creating a Ceph OSD on a single node: Syntax The CRUSH hierarchy is notional, so the ceph osd crush add command allows you to add OSDs to the CRUSH hierarchy wherever you wish. The location you specify should reflect its actual location. If you specify at least one bucket, the command places the OSD into the most specific bucket you specify, and it moves that bucket underneath any other buckets you specify.
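For reference, a concrete form of the orchestrator command shown above might look like the following; the host name ceph-node1 and the device path /dev/sdb are assumptions used only for illustration:
ceph orch daemon add osd ceph-node1:/dev/sdb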
To add an OSD to a CRUSH hierarchy: Syntax Important If you specify only the root bucket, the command attaches the OSD directly to the root. However, CRUSH rules expect OSDs to be inside of hosts or chassis, and hosts or chassis should be inside of other buckets reflecting your cluster topology. The following example adds osd.0 to the hierarchy: Note You can also use ceph osd crush set or ceph osd crush create-or-move to add an OSD to the CRUSH hierarchy. 2.3.3. Moving an OSD within a CRUSH Hierarchy If the storage cluster topology changes, you can move an OSD in the CRUSH hierarchy to reflect its actual location. Important Moving an OSD in the CRUSH hierarchy means that Ceph will recompute which placement groups get assigned to the OSD, potentially resulting in significant redistribution of data. To move an OSD within the CRUSH hierarchy: Syntax Note You can also use ceph osd crush create-or-move to move an OSD within the CRUSH hierarchy. 2.3.4. Removing an OSD from a CRUSH Hierarchy Removing an OSD from a CRUSH hierarchy is the first step when you want to remove an OSD from your cluster. When you remove the OSD from the CRUSH map, CRUSH recomputes which OSDs get the placement groups and data re-balances accordingly. See Adding/Removing OSDs for additional details. To remove an OSD from the CRUSH map of a running cluster, execute the following: Syntax 2.4. Device class Ceph's CRUSH map provides extraordinary flexibility in controlling data placement. This is one of Ceph's greatest strengths. Early Ceph deployments used hard disk drives almost exclusively. Today, Ceph clusters are frequently built with multiple types of storage devices: HDD, SSD, NVMe, or even various classes of the foregoing. For example, it is common in Ceph Object Gateway deployments to have storage policies where clients can store data on slower HDDs and other storage policies for storing data on fast SSDs. Ceph Object Gateway deployments might even have a pool backed by fast SSDs for bucket indices. Additionally, OSD nodes also frequently have SSDs used exclusively for journals or write-ahead logs that do NOT appear in the CRUSH map. These complex hardware scenarios historically required manually editing the CRUSH map, which can be time-consuming and tedious. It is not required to have different CRUSH hierarchies for different classes of storage devices. CRUSH rules work in terms of the CRUSH hierarchy. However, if different classes of storage devices reside in the same hosts, the process becomes more complicated- requiring users to create multiple CRUSH hierarchies for each class of device, and then disable the osd crush update on start option that automates much of the CRUSH hierarchy management. Device classes eliminate this tediousness by telling the CRUSH rule what class of device to use, dramatically simplifying CRUSH management tasks. Note The ceph osd tree command has a column reflecting a device class. 2.4.1. Setting a device class To set a device class for an OSD, execute the following: Syntax Example Note Ceph might assign a class to a device automatically. However, class names are simply arbitrary strings. There is no requirement to adhere to hdd , ssd or nvme . In the foregoing example, a device class named bucket-index might indicate an SSD device that a Ceph Object Gateway pool uses exclusively for bucket index workloads. To change a device class that was already set, use ceph osd crush rm-device-class first. 2.4.2.
Removing a device class To remove a device class for an OSD, execute the following: Syntax Example 2.4.3. Renaming a device class To rename a device class for all OSDs that use that class, execute the following: Syntax Example 2.4.4. Listing a device class To list device classes in the CRUSH map, execute the following: Syntax The output will look something like this: Example 2.4.5. Listing OSDs of a device class To list all OSDs that belong to a particular class, execute the following: Syntax Example The output is simply a list of OSD numbers. For example: 2.4.6. Listing CRUSH Rules by Class To list all CRUSH rules that reference the same class, execute the following: Syntax Example 2.5. CRUSH weights The CRUSH algorithm assigns a weight value in terabytes (by convention) per OSD device with the objective of approximating a uniform probability distribution for write requests that assign new data objects to PGs and PGs to OSDs. For this reason, as a best practice, we recommend creating CRUSH hierarchies with devices of the same type and size, and assigning the same weight. We also recommend using devices with the same I/O and throughput characteristics so that you will also have uniform performance characteristics in your CRUSH hierarchy, even though performance characteristics do not affect data distribution. Since using uniform hardware is not always practical, you might incorporate OSD devices of different sizes and use a relative weight so that Ceph will distribute more data to larger devices and less data to smaller devices. 2.5.1. Setting CRUSH weights of OSDs To set an OSD CRUSH weight in Terabytes within the CRUSH map, execute the following command: Where: name Description The full name of the OSD. Type String Required Yes Example osd.0 weight Description The CRUSH weight for the OSD. This should be the size of the OSD in Terabytes, where 1.0 is 1 Terabyte. Type Double Required Yes Example 2.0 This setting is used when creating an OSD or adjusting the CRUSH weight immediately after adding the OSD. It usually does not change over the life of the OSD. 2.5.2. Setting a Bucket's OSD Weights Using ceph osd crush reweight can be time-consuming. You can set (or reset) all Ceph OSD weights under a bucket (row, rack, node, and so on) by executing: Syntax Where, name is the name of the CRUSH bucket. 2.5.3. Set an OSD's in Weight For the purposes of ceph osd in and ceph osd out , an OSD is either in the cluster or out of the cluster. That is how a monitor records an OSD's status. However, even though an OSD is in the cluster, it might be experiencing a malfunction such that you do not want to rely on it as much until you fix it (for example, replace a storage drive, change out a controller, and so on). You can increase or decrease the in weight of a particular OSD (that is, without changing its weight in Terabytes) by executing: Syntax Where: id is the OSD number. weight is a range from 0.0-1.0, where 0 is not in the cluster (that is, it does not have any PGs assigned to it) and 1.0 is in the cluster (that is, the OSD receives the same number of PGs as other OSDs). 2.5.4. Setting the OSDs weight by utilization CRUSH is designed to approximate a uniform probability distribution for write requests that assign new data objects to PGs and PGs to OSDs. However, a cluster might become imbalanced anyway. This can happen for a number of reasons.
For example: Multiple Pools: You can assign multiple pools to a CRUSH hierarchy, but the pools might have different numbers of placement groups, size (number of replicas to store), and object size characteristics. Custom Clients: Ceph clients such as block device, object gateway and filesystem share data from their clients and stripe the data as objects across the cluster as uniform-sized smaller RADOS objects. So except for the foregoing scenario, CRUSH usually achieves its goal. However, there is another case where a cluster can become imbalanced: namely, using librados to store data without normalizing the size of objects. This scenario can lead to imbalanced clusters (for example, storing 100 1 MB objects and 10 4 MB objects will make a few OSDs have more data than the others). Probability: A uniform distribution will result in some OSDs with more PGs and some with less. For clusters with a large number of OSDs, the statistical outliers will be further out. You can reweight OSDs by utilization by executing the following: Syntax Example Where: threshold is a percentage of utilization such that OSDs facing higher data storage loads will receive a lower weight and thus fewer PGs assigned to them. The default value is 120 , reflecting 120%. Any value from 100+ is a valid threshold. Optional. weight_change_amount is the amount to change the weight. Valid values are greater than 0.0 - 1.0 . The default value is 0.05 . Optional. number_of_OSDs is the maximum number of OSDs to reweight. For large clusters, limiting the number of OSDs to reweight prevents significant rebalancing. Optional. no-increasing is off by default. Increasing the osd weight is allowed when using the reweight-by-utilization or test-reweight-by-utilization commands. If this option is used with these commands, it prevents the OSD weight from increasing, even if the OSD is underutilized. Optional. Important Executing reweight-by-utilization is recommended and somewhat inevitable for large clusters. Utilization rates might change over time, and as your cluster size or hardware changes, the weightings might need to be updated to reflect changing utilization. If you elect to reweight by utilization, you might need to re-run this command as utilization, hardware or cluster size change. Executing this or other weight commands that assign a weight will override the weight assigned by this command (for example, osd reweight-by-utilization , osd crush weight , osd weight , in or out ). 2.5.5. Setting an OSD's Weight by PG distribution In CRUSH hierarchies with a smaller number of OSDs, it's possible for some OSDs to get more PGs than other OSDs, resulting in a higher load. You can reweight OSDs by PG distribution to address this situation by executing the following: Syntax Where: poolname is the name of the pool. Ceph will examine how the pool assigns PGs to OSDs and reweight the OSDs according to this pool's PG distribution. Note that multiple pools could be assigned to the same CRUSH hierarchy. Reweighting OSDs according to one pool's distribution could have unintended effects for other pools assigned to the same CRUSH hierarchy if they do not have the same size (number of replicas) and PGs. 2.5.6. Recalculating a CRUSH Tree's weights CRUSH tree buckets should be the sum of their leaf weights. If you manually edit the CRUSH map weights, you should execute the following to ensure that the CRUSH bucket tree accurately reflects the sum of the leaf OSDs under the bucket. Syntax 2.6. 
Primary affinity When a Ceph Client reads or writes data, it always contacts the primary OSD in the acting set. For set [2, 3, 4] , osd.2 is the primary. Sometimes an OSD is not well suited to act as a primary compared to other OSDs (for example, it has a slow disk or a slow controller). To prevent performance bottlenecks (especially on read operations) while maximizing utilization of your hardware, you can set a Ceph OSD's primary affinity so that CRUSH is less likely to use the OSD as a primary in an acting set. : Syntax Primary affinity is 1 by default ( that is, an OSD might act as a primary). You might set the OSD primary range from 0-1 , where 0 means that the OSD might NOT be used as a primary and 1 means that an OSD might be used as a primary. When the weight is < 1 , it is less likely that CRUSH will select the Ceph OSD Daemon to act as a primary. 2.7. CRUSH rules CRUSH rules define how a Ceph client selects buckets and the primary OSD within them to store objects, and how the primary OSD selects buckets and the secondary OSDs to store replicas or coding chunks. For example, you might create a rule that selects a pair of target OSDs backed by SSDs for two object replicas, and another rule that selects three target OSDs backed by SAS drives in different data centers for three replicas. A rule takes the following form: id Description A unique whole number for identifying the rule. Purpose A component of the rule mask. Type Integer Required Yes Default 0 type Description Describes a rule for either a storage drive replicated or erasure coded. Purpose A component of the rule mask. Type String Required Yes Default replicated Valid Values Currently only replicated min_size Description If a pool makes fewer replicas than this number, CRUSH will not select this rule. Type Integer Purpose A component of the rule mask. Required Yes Default 1 max_size Description If a pool makes more replicas than this number, CRUSH will not select this rule. Type Integer Purpose A component of the rule mask. Required Yes Default 10 step take <bucket-name> [class <class-name>] Description Takes a bucket name, and begins iterating down the tree. Purpose A component of the rule. Required Yes Example step take data step take data class ssd step choose firstn <num> type <bucket-type> Description Selects the number of buckets of the given type. The number is usually the number of replicas in the pool (that is, pool size). If <num> == 0 , choose pool-num-replicas buckets (all available). If <num> > 0 && < pool-num-replicas , choose that many buckets. If <num> < 0 , it means pool-num-replicas - {num} . Purpose A component of the rule. Prerequisite Follow step take or step choose . Example step choose firstn 1 type row step chooseleaf firstn <num> type <bucket-type> Description Selects a set of buckets of {bucket-type} and chooses a leaf node from the subtree of each bucket in the set of buckets. The number of buckets in the set is usually the number of replicas in the pool (that is, pool size). If <num> == 0 , choose pool-num-replicas buckets (all available). If <num> > 0 && < pool-num-replicas , choose that many buckets. If <num> < 0 , it means pool-num-replicas - <num> . Purpose A component of the rule. Usage removes the need to select a device using two steps. Prerequisite Follows step take or step choose . Example step chooseleaf firstn 0 type row step emit Description Outputs the current value and empties the stack. 
Typically used at the end of a rule, but might also be used to pick from different trees in the same rule. Purpose A component of the rule. Prerequisite Follows step choose . Example step emit firstn versus indep Description Controls the replacement strategy CRUSH uses when OSDs are marked down in the CRUSH map. If this rule is to be used with replicated pools it should be firstn and if it is for erasure-coded pools it should be indep . Example You have a PG stored on OSDs 1, 2, 3, 4, 5 in which 3 goes down. In the first scenario, with the firstn mode, CRUSH adjusts its calculation to select 1 and 2, then selects 3 but discovers it is down, so it retries and selects 4 and 5, and then goes on to select a new OSD 6. The final CRUSH mapping change is from 1, 2, 3, 4, 5 to 1, 2, 4, 5, 6 . In the second scenario, with indep mode on an erasure-coded pool, CRUSH attempts to select the failed OSD 3, tries again and picks out 6, for a final transformation from 1, 2, 3, 4, 5 to 1, 2, 6, 4, 5 . Important A given CRUSH rule can be assigned to multiple pools, but it is not possible for a single pool to have multiple CRUSH rules. 2.7.1. Listing CRUSH rules To list CRUSH rules from the command line, execute the following: Syntax 2.7.2. Dumping CRUSH rules To dump the contents of a specific CRUSH rule, execute the following: Syntax 2.7.3. Adding CRUSH rules To add a CRUSH rule, you must specify a rule name, the root node of the hierarchy you wish to use, the type of bucket you want to replicate across (for example, 'rack', 'row', and so on) and the mode for choosing the bucket. Syntax Ceph creates a rule with chooseleaf and one bucket of the type you specify. Example Create the following rule: 2.7.4. Creating CRUSH rules for replicated pools To create a CRUSH rule for a replicated pool, execute the following: Syntax Where: <name> : The name of the rule. <root> : The root of the CRUSH hierarchy. <failure-domain> : The failure domain. For example: host or rack . <class> : The storage device class. For example: hdd or ssd . Example 2.7.5. Creating CRUSH rules for erasure coded pools To add a CRUSH rule for use with an erasure coded pool, you might specify a rule name and an erasure code profile. Syntax Example Additional Resources See Erasure code profiles for more details. 2.7.6. Removing CRUSH rules To remove a rule, execute the following and specify the CRUSH rule name: Syntax 2.8. CRUSH tunables overview The Ceph project has grown exponentially with many changes and many new features. Beginning with the first commercially supported major release of Ceph, v0.48 (Argonaut), Ceph provides the ability to adjust certain parameters of the CRUSH algorithm, that is, the settings are not frozen in the source code. A few important points to consider: Adjusting CRUSH values might result in the shift of some PGs between storage nodes. If the Ceph cluster is already storing a lot of data, be prepared for some fraction of the data to move. The ceph-osd and ceph-mon daemons will start requiring the feature bits of new connections as soon as they receive an updated map. However, already-connected clients are effectively grandfathered in, and will misbehave if they do not support the new feature. Make sure when you upgrade your Ceph Storage Cluster daemons that you also update your Ceph clients. If the CRUSH tunables are set to non-legacy values and then later changed back to the legacy values, ceph-osd daemons will not be required to support the feature.
However, the OSD peering process requires examining and understanding old maps. Therefore, you should not run old versions of the ceph-osd daemon if the cluster has previously used non-legacy CRUSH values, even if the latest version of the map has been switched back to using the legacy defaults. 2.8.1. CRUSH tuning Before you tune CRUSH, you should ensure that all Ceph clients and all Ceph daemons use the same version. If you have recently upgraded, ensure that you have restarted daemons and reconnected clients. The simplest way to adjust the CRUSH tunables is by changing to a known profile. Those are: legacy : The legacy behavior from v0.47 (pre-Argonaut) and earlier. argonaut : The legacy values supported by v0.48 (Argonaut) release. bobtail : The values supported by the v0.56 (Bobtail) release. firefly : The values supported by the v0.80 (Firefly) release. hammer : The values supported by the v0.94 (Hammer) release. jewel : The values supported by the v10.0.2 (Jewel) release. optimal : The current best values. default : The current default values for a new cluster. You can select a profile on a running cluster with the command: Syntax Note This might result in some data movement. Generally, you should set the CRUSH tunables after you upgrade, or if you receive a warning. Starting with version v0.74, Ceph issues a health warning if the CRUSH tunables are not set to their optimal values; the optimal values are the default as of v0.73. You can remove the warning by adjusting the tunables on the existing cluster. Note that this will result in some data movement (possibly as much as 10%). This is the preferred route, but should be taken with care on a production cluster where the data movement might affect performance. You can enable optimal tunables with: If things go poorly (for example, too much load) and not very much progress has been made, or there is a client compatibility problem (old kernel cephfs or rbd clients, or pre-bobtail librados clients), you can switch back to an earlier profile: Syntax For example, to restore the pre-v0.48 (Argonaut) values, execute: Example 2.8.2. CRUSH tuning, the hard way If you can ensure that all clients are running recent code, you can adjust the tunables by extracting the CRUSH map, modifying the values, and reinjecting it into the cluster. Extract the latest CRUSH map: Adjust tunables. These values appear to offer the best behavior for both large and small clusters we tested with. You will need to additionally specify the --enable-unsafe-tunables argument to crushtool for this to work. Please use this option with extreme care. Reinject modified map: 2.8.3. CRUSH legacy values For reference, the legacy values for the CRUSH tunables can be set with: Again, the special --enable-unsafe-tunables option is required. Further, as noted above, be careful running old versions of the ceph-osd daemon after reverting to legacy values as the feature bit is not perfectly enforced. 2.9. Edit a CRUSH map Generally, modifying your CRUSH map at runtime with the Ceph CLI is more convenient than editing the CRUSH map manually. However, there are times when you might choose to edit it, such as changing the default bucket types, or using a bucket algorithm other than straw2 . To edit an existing CRUSH map: Getting the CRUSH map . Decompiling the CRUSH map . Edit at least one of the devices, buckets, and rules. Compile the CRUSH map . Setting a CRUSH map .
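As a minimal sketch of that round trip, with file names that are only examples:
ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
vi /tmp/crushmap.txt                  # edit devices, buckets, or rules
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap-new.bin
ceph osd setcrushmap -i /tmp/crushmap-new.bin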
To activate a CRUSH Map rule for a specific pool, identify the common rule number and specify that rule number for the pool when creating the pool. 2.9.1. Getting the CRUSH map To get the CRUSH map for your cluster, execute the following: Syntax Ceph will output (-o) a compiled CRUSH map to the file name you specified. Since the CRUSH map is in a compiled form, you must decompile it first before you can edit it. 2.9.2. Decompiling the CRUSH map To decompile a CRUSH map, execute the following: Syntax Ceph decompiles (-d) the compiled CRUSH map and sends the output (-o) to the file name you specified. 2.9.3. Setting a CRUSH map To set the CRUSH map for your cluster, execute the following: Syntax Ceph inputs the compiled CRUSH map of the file name you specified as the CRUSH map for the cluster. 2.9.4. Compiling the CRUSH map To compile a CRUSH map, execute the following: Syntax Ceph will store a compiled CRUSH map to the file name you specified. 2.10. CRUSH storage strategies examples If you want to have most pools default to OSDs backed by large hard drives, but have some pools mapped to OSDs backed by fast solid-state drives (SSDs), CRUSH can handle these scenarios easily. Use device classes. The process is simple: add a class to each device. Syntax Example Then, create rules to use the devices. Syntax Example Finally, set pools to use the rules. Syntax Example There is no need to manually edit the CRUSH map, because one hierarchy can serve multiple classes of devices.
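After applying a strategy like the hot and cold example above, the rule-to-pool mapping can be sanity-checked with commands along these lines (the pool names simply reuse those from the example):
ceph osd crush rule ls
ceph osd pool get cold crush_rule
ceph osd pool get hot crush_rule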
[ "ceph osd crush tree --show-shadow ID CLASS WEIGHT TYPE NAME -24 ssd 4.54849 root default~ssd -19 ssd 0.90970 host ceph01~ssd 8 ssd 0.90970 osd.8 -20 ssd 0.90970 host ceph02~ssd 7 ssd 0.90970 osd.7 -21 ssd 0.90970 host ceph03~ssd 3 ssd 0.90970 osd.3 -22 ssd 0.90970 host ceph04~ssd 5 ssd 0.90970 osd.5 -23 ssd 0.90970 host ceph05~ssd 6 ssd 0.90970 osd.6 -2 hdd 50.94173 root default~hdd -4 hdd 7.27739 host ceph01~hdd 10 hdd 7.27739 osd.10 -12 hdd 14.55478 host ceph02~hdd 0 hdd 7.27739 osd.0 12 hdd 7.27739 osd.12 -6 hdd 14.55478 host ceph03~hdd 4 hdd 7.27739 osd.4 11 hdd 7.27739 osd.11 -10 hdd 7.27739 host ceph04~hdd 1 hdd 7.27739 osd.1 -8 hdd 7.27739 host ceph05~hdd 2 hdd 7.27739 osd.2 -1 55.49022 root default -3 8.18709 host ceph01 10 hdd 7.27739 osd.10 8 ssd 0.90970 osd.8 -11 15.46448 host ceph02 0 hdd 7.27739 osd.0 12 hdd 7.27739 osd.12 7 ssd 0.90970 osd.7 -5 15.46448 host ceph03 4 hdd 7.27739 osd.4 11 hdd 7.27739 osd.11 3 ssd 0.90970 osd.3 -9 8.18709 host ceph04 1 hdd 7.27739 osd.1 5 ssd 0.90970 osd.5 -7 8.18709 host ceph05 2 hdd 7.27739 osd.2 6 ssd 0.90970 osd.6", "[bucket-type] [bucket-name] { id [a unique negative numeric ID] weight [the relative capacity/capability of the item(s)] alg [the bucket type: uniform | list | tree | straw2 ] hash [the hash type: 0 by default] item [item-name] weight [weight] }", "host node1 { id -1 alg straw2 hash 0 item osd.0 weight 1.00 item osd.1 weight 1.00 } host node2 { id -2 alg straw2 hash 0 item osd.2 weight 1.00 item osd.3 weight 1.00 } rack rack1 { id -3 alg straw2 hash 0 item node1 weight 2.00 item node2 weight 2.00 }", "root=default row=a rack=a2 chassis=a2a host=a2a1", "ceph osd crush add-bucket {name} {type}", "ceph osd crush add-bucket ssd-root root ceph osd crush add-bucket hdd-journal-root root ceph osd crush add-bucket hdd-root root", "added bucket ssd-root type root to crush map added bucket hdd-journal-root type root to crush map added bucket hdd-root type root to crush map", "ceph osd crush add-bucket ssd-row1 row ceph osd crush add-bucket ssd-row1-rack1 rack ceph osd crush add-bucket ssd-row1-rack1-host1 host ceph osd crush add-bucket ssd-row1-rack1-host2 host ceph osd crush add-bucket hdd-row1 row ceph osd crush add-bucket hdd-row1-rack2 rack ceph osd crush add-bucket hdd-row1-rack1-host1 host ceph osd crush add-bucket hdd-row1-rack1-host2 host ceph osd crush add-bucket hdd-row1-rack1-host3 host ceph osd crush add-bucket hdd-row1-rack1-host4 host", "ceph osd tree", "ceph osd crush move ssd-row1 root=ssd-root ceph osd crush move ssd-row1-rack1 row=ssd-row1 ceph osd crush move ssd-row1-rack1-host1 rack=ssd-row1-rack1 ceph osd crush move ssd-row1-rack1-host2 rack=ssd-row1-rack1", "ceph osd tree", "ceph osd crush remove {bucket-name}", "ceph osd crush rm {bucket-name}", "ceph osd crush tree -f json-pretty", "[ { \"id\": -2, \"name\": \"ssd\", \"type\": \"root\", \"type_id\": 10, \"items\": [ { \"id\": -6, \"name\": \"dell-per630-11-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 6, \"name\": \"osd.6\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] }, { \"id\": -7, \"name\": \"dell-per630-12-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 7, \"name\": \"osd.7\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] }, { \"id\": -8, \"name\": \"dell-per630-13-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 8, \"name\": \"osd.8\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] } ] }, { \"id\": 
-1, \"name\": \"default\", \"type\": \"root\", \"type_id\": 10, \"items\": [ { \"id\": -3, \"name\": \"dell-per630-11\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 0, \"name\": \"osd.0\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 3, \"name\": \"osd.3\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] }, { \"id\": -4, \"name\": \"dell-per630-12\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 1, \"name\": \"osd.1\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 4, \"name\": \"osd.4\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] }, { \"id\": -5, \"name\": \"dell-per630-13\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 2, \"name\": \"osd.2\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 5, \"name\": \"osd.5\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] } ] } ]", "ceph orch daemon add osd HOST :_DEVICE_,[ DEVICE ]", "ceph osd crush add ID_OR_NAME WEIGHT [ BUCKET_TYPE = BUCKET_NAME ...]", "ceph osd crush add osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1", "ceph osd crush set ID_OR_NAME WEIGHT root= POOL_NAME [ BUCKET_TYPE = BUCKET_NAME ...]", "ceph osd crush remove NAME", "ceph osd crush set-device-class CLASS OSD_ID [ OSD_ID ..]", "ceph osd crush set-device-class hdd osd.0 osd.1 ceph osd crush set-device-class ssd osd.2 osd.3 ceph osd crush set-device-class bucket-index osd.4", "ceph osd crush rm-device-class CLASS OSD_ID [ OSD_ID ..]", "ceph osd crush rm-device-class hdd osd.0 osd.1 ceph osd crush rm-device-class ssd osd.2 osd.3 ceph osd crush rm-device-class bucket-index osd.4", "ceph osd crush class rename OLD_NAME NEW_NAME", "ceph osd crush class rename hdd sas15k", "ceph osd crush class ls", "[ \"hdd\", \"ssd\", \"bucket-index\" ]", "ceph osd crush class ls-osd CLASS", "ceph osd crush class ls-osd hdd", "0 1 2 3 4 5 6", "ceph osd crush rule ls-by-class CLASS", "ceph osd crush rule ls-by-class hdd", "ceph osd crush reweight _NAME_ _WEIGHT_", "osd crush reweight-subtree NAME", "ceph osd reweight ID WEIGHT", "ceph osd reweight-by-utilization [THRESHOLD_] [ WEIGHT_CHANGE_AMOUNT ] [ NUMBER_OF_OSDS ] [--no-increasing]", "ceph osd test-reweight-by-utilization 110 .5 4 --no-increasing", "osd reweight-by-pg POOL_NAME", "osd crush reweight-all", "ceph osd primary-affinity OSD_ID WEIGHT", "rule <rulename> { id <unique number> type [replicated | erasure] min_size <min-size> max_size <max-size> step take <bucket-type> [class <class-name>] step [choose|chooseleaf] [firstn|indep] <N> <bucket-type> step emit }", "ceph osd crush rule list ceph osd crush rule ls", "ceph osd crush rule dump NAME", "ceph osd crush rule create-simple RUENAME ROOT BUCKET_NAME FIRSTN_OR_INDEP", "ceph osd crush rule create-simple deleteme default host firstn", "{ \"id\": 1, \"rule_name\": \"deleteme\", \"type\": 1, \"min_size\": 1, \"max_size\": 10, \"steps\": [ { \"op\": \"take\", \"item\": -1, \"item_name\": \"default\"}, { \"op\": \"chooseleaf_firstn\", \"num\": 0, \"type\": \"host\"}, { \"op\": \"emit\"}]}", "ceph osd crush rule create-replicated NAME ROOT FAILURE_DOMAIN CLASS", "ceph osd crush rule create-replicated fast default host ssd", "ceph osd crush rule create-erasure RULE_NAME PROFILE_NAME", "ceph osd crush rule create-erasure default default", "ceph osd crush rule rm NAME", "ceph osd crush tunables 
PROFILE", "ceph osd crush tunables optimal", "ceph osd crush tunables PROFILE", "ceph osd crush tunables legacy", "ceph osd getcrushmap -o /tmp/crush", "crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new", "ceph osd setcrushmap -i /tmp/crush.new", "crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy", "ceph osd getcrushmap -o COMPILED_CRUSHMAP_FILENAME", "crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME", "ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME", "crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME", "ceph osd crush set-device-class CLASS OSD_ID [ OSD_ID ]", "ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5 ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7", "ceph osd crush rule create-replicated RULENAME ROOT FAILURE_DOMAIN_TYPE DEVICE_CLASS", "ceph osd crush rule create-replicated cold default host hdd ceph osd crush rule create-replicated hot default host ssd", "ceph osd pool set POOL_NAME crush_rule RULENAME", "ceph osd pool set cold crush_rule hdd ceph osd pool set hot crush_rule ssd", "device 0 osd.0 class hdd device 1 osd.1 class hdd device 2 osd.2 class ssd device 3 osd.3 class ssd device 4 osd.4 class hdd device 5 osd.5 class hdd device 6 osd.6 class ssd device 7 osd.7 class ssd host ceph-osd-server-1 { id -1 alg straw2 hash 0 item osd.0 weight 1.00 item osd.1 weight 1.00 item osd.2 weight 1.00 item osd.3 weight 1.00 } host ceph-osd-server-2 { id -2 alg straw2 hash 0 item osd.4 weight 1.00 item osd.5 weight 1.00 item osd.6 weight 1.00 item osd.7 weight 1.00 } root default { id -3 alg straw2 hash 0 item ceph-osd-server-1 weight 4.00 item ceph-osd-server-2 weight 4.00 } rule cold { ruleset 0 type replicated min_size 2 max_size 11 step take default class hdd step chooseleaf firstn 0 type host step emit } rule hot { ruleset 1 type replicated min_size 2 max_size 11 step take default class ssd step chooseleaf firstn 0 type host step emit }" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/storage_strategies_guide/crush-admin-overview_strategy
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_hybrid_and_multicloud_resources/providing-feedback-on-red-hat-documentation_rhodf
4.3. Confined and Unconfined Users
4.3. Confined and Unconfined Users Each Linux user is mapped to an SELinux user via SELinux policy. This allows Linux users to inherit the restrictions on SELinux users. This Linux user mapping is seen by running the semanage login -l command as the Linux root user: In Red Hat Enterprise Linux 6, Linux users are mapped to the SELinux __default__ login by default, which is mapped to the SELinux unconfined_u user. The following line defines the default mapping: The following procedure demonstrates how to add a new Linux user to the system and how to map that user to the SELinux unconfined_u user. It assumes that the Linux root user is running unconfined, as it does by default in Red Hat Enterprise Linux 6: As the Linux root user, run the useradd newuser command to create a new Linux user named newuser . As the Linux root user, run the passwd newuser command to assign a password to the Linux newuser user: Log out of your current session, and log in as the Linux newuser user. When you log in, the pam_selinux PAM module automatically maps the Linux user to an SELinux user (in this case, unconfined_u ), and sets up the resulting SELinux context. The Linux user's shell is then launched with this context. Run the id -Z command to view the context of a Linux user: Note If you no longer need the newuser user on your system, log out of the Linux newuser 's session, log in with your account, and run the userdel -r newuser command as the Linux root user. It will remove newuser along with their home directory. Confined and unconfined Linux users are subject to executable and writable memory checks, and are also restricted by MCS or MLS. If an unconfined Linux user executes an application that SELinux policy defines as one that can transition from the unconfined_t domain to its own confined domain, the unconfined Linux user is still subject to the restrictions of that confined domain. The security benefit of this is that, even though a Linux user is running unconfined, the application remains confined. Therefore, the exploitation of a flaw in the application can be limited by the policy. Similarly, we can apply these checks to confined users. However, each confined Linux user is restricted by a confined user domain against the unconfined_t domain. The SELinux policy can also define a transition from a confined user domain to its own target confined domain. In such a case, confined Linux users are subject to the restrictions of that target confined domain. The main point is that special privileges are associated with the confined users according to their role. In the table below, you can see examples of basic confined domains for Linux users in Red Hat Enterprise Linux 6: Table 4.1. SELinux User Capabilities User Role Domain X Window System su or sudo Execute in home directory and /tmp/ (default) Networking sysadm_u sysadm_r sysadm_t yes su and sudo yes yes staff_u staff_r staff_t yes only sudo yes yes user_u user_r user_t yes no yes yes guest_u guest_r guest_t no no no no xguest_u xguest_r xguest_t yes no no Firefox only Linux users in the user_t , guest_t , and xguest_t domains can only run set user ID (setuid) applications if SELinux policy permits it (for example, passwd ). These users cannot run the su and sudo setuid applications, and therefore cannot use these applications to become the Linux root user. Linux users in the sysadm_t , staff_t , user_t , and xguest_t domains can log in via the X Window System and a terminal. 
By default, Linux users in the guest_t and xguest_t domains cannot execute applications in their home directories or /tmp/ , preventing them from executing applications, which inherit users' permissions, in directories they have write access to. This helps prevent flawed or malicious applications from modifying users' files. By default, Linux users in the staff_t and user_t domains can execute applications in their home directories and /tmp/ . Refer to Section 6.6, "Booleans for Users Executing Applications" for information about allowing and preventing users from executing applications in their home directories and /tmp/ . The only network access Linux users in the xguest_t domain have is Firefox connecting to web pages. Along with the already mentioned SELinux users, there are special roles that can be mapped to those users. These roles determine what SELinux allows the user to do: webadm_r can only administrate SELinux types related to the Apache HTTP Server. See chapter Apache HTTP Server in the Managing Confined Services guide for further information. dbadm_r can only administrate SELinux types related to the MariaDB database and the PostgreSQL database management system. See chapters MySQL and PostgreSQL in the Managing Confined Services guide for further information. logadm_r can only administrate SELinux types related to the syslog and auditlog processes. secadm_r can only administrate SELinux. auditadm_r can only administrate processes related to the audit subsystem. To list all available roles, run the following command: Note that the seinfo command is provided by the setools-console package, which is not installed by default.
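As a hedged example of putting one of the confined users above to work, an existing Linux user can be mapped to the staff_u SELinux user; the user name john is only illustrative, and the new mapping takes effect at the user's next login:
~]# semanage login -a -s staff_u john
~]# semanage login -l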
[ "~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023", "__default__ unconfined_u s0-s0:c0.c1023", "~]# passwd newuser Changing password for user newuser. New UNIX password: Enter a password Retype new UNIX password: Enter the same password again passwd: all authentication tokens updated successfully.", "[newuser@localhost ~]USD id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "~]USD seinfo -r" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-Security-Enhanced_Linux-Targeted_Policy-Confined_and_Unconfined_Users
Chapter 2. Installing Debezium connectors
Chapter 2. Installing Debezium connectors Install Debezium connectors through Streams for Apache Kafka by extending Kafka Connect with connector plug-ins. Following a deployment of Streams for Apache Kafka, you can deploy Debezium as a connector configuration through Kafka Connect. 2.1. Kafka topic creation recommendations Debezium stores data in multiple Apache Kafka topics. The topics must either be created in advance by an administrator, or you can configure Kafka Connect to configure topics automatically . The following list describes limitations and recommendations to consider when creating topics: Database schema history topics for the Debezium Db2, MySQL, Oracle, and SQL Server connectors For each of the preceding connectors, a database schema history topic is required. Whether you manually create the database schema history topic, use the Kafka broker to create the topic automatically, or use Kafka Connect to create the topic , ensure that the topic is configured with the following settings: Infinite or very long retention. Replication factor of at least three in production environments. Single partition. Other topics When you enable Kafka log compaction so that only the last change event for a given record is saved, set the following topic properties in Apache Kafka: min.compaction.lag.ms delete.retention.ms To ensure that topic consumers have enough time to receive all events and delete markers, specify values for the preceding properties that are larger than the maximum downtime that you expect for your sink connectors. For example, consider the downtime that might occur when you apply updates to sink connectors. Replicated in production. Single partition. You can relax the single partition rule, but your application must handle out-of-order events for different rows in the database. Events for a single row are still totally ordered. If you use multiple partitions, the default behavior is that Kafka determines the partition by hashing the key. Other partition strategies require the use of single message transformations (SMTs) to set the partition number for each record. 2.2. Debezium deployment on Streams for Apache Kafka To set up connectors for Debezium on Red Hat OpenShift Container Platform, you use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in for each connector that you want to use. After the connector starts, it connects to the configured database and generates change event records for each inserted, updated, and deleted row or document. Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in. During the deployment process, you create and use the following custom resources (CRs): A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts to include in the image. A KafkaConnector CR that provides the details that the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR. In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component.
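Before moving on to the build itself: if you prefer to create the database schema history topic described in Section 2.1 in advance, a Strimzi KafkaTopic custom resource is one possible way to apply those settings, assuming the Topic Operator is deployed. The namespace, topic name, and cluster label below are placeholders, not values prescribed by this guide.
# Sketch: pre-create a schema history topic with a single partition,
# three replicas, and unlimited retention
oc apply -n debezium -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: schema-changes.inventory
  labels:
    strimzi.io/cluster: debezium-kafka-cluster
spec:
  partitions: 1
  replicas: 3
  config:
    retention.ms: -1
EOF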
When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image. The spec.build.output parameter in the KafkaConnect CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically. Note If you use a KafkaConnect resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information. Additional resources Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift. Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift. 2.2.1. Deploying Debezium with Streams for Apache Kafka You follow the same steps to deploy each type of Debezium connector. The following section describes how to deploy a Debezium MySQL connector. With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use. During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server. The newly created container is pushed to the container registry that is specified in .spec.build.output , and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector custom resources to start the connectors that are included in the build. Prerequisites You have access to an OpenShift cluster on which the cluster Operator is installed. The Streams for Apache Kafka Operator is running. An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift . Kafka Connect is deployed on Streams for Apache Kafka You have a Red Hat build of Debezium license. The OpenShift oc CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource: To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub An account and permissions to create and manage images in the registry. To store the build image as a native OpenShift ImageStream An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams in the OpenShift Container Platform documentation. Procedure Log in to the OpenShift cluster. Create a Debezium KafkaConnect custom resource (CR) for the connector, or modify an existing one. 
For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource. Example 2.1. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector In the example that follows, the custom resource is configured to download the following artifacts: The Debezium connector archive. The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector. The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT . apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: debezium-kafka-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 1 spec: version: 3.6.0 build: 2 output: 3 type: imagestream 4 image: debezium-streams-connect:latest plugins: 5 - name: debezium-connector-mysql artifacts: - type: zip 6 url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mysql/2.7.3.Final-redhat-00001/debezium-connector-mysql-2.7.3.Final-redhat-00001-plugin.zip 7 - type: zip url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat- <build-number> /apicurio-registry-distro-connect-converter-2.4.4.Final-redhat- <build-number> .zip 8 - type: zip url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9 - type: jar url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10 - type: jar url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar - type: jar url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093 ... Table 2.1. Descriptions of Kafka Connect configuration settings Item Description 1 Sets the strimzi.io/use-connector-resources annotation to "true" to enable the Cluster Operator to use KafkaConnector resources to configure connectors in this Kafka Connect cluster. 2 The spec.build configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts. 3 The build.output specifies the registry in which the newly built image is stored. 4 Specifies the name and image name for the image output. Valid values for output.type are docker to push into a container registry such as Docker Hub or Quay, or imagestream to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying the build.output in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in Deploying and Managing Streams for Apache Kafka on OpenShift. 5 The plugins configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-in name , and information about the artifacts that are required to build the connector.
Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component. 6 The value of artifacts.type specifies the file type of the artifact specified in the artifacts.url . Valid types are zip , tgz , or jar . Debezium connector archives are provided in .zip file format. The type value must match the type of the file that is referenced in the url field. 7 The value of artifacts.url specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server. 8 (Optional) Specifies the artifact type and url for downloading the Apicurio Registry component. Include the Apicurio Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter. 9 (Optional) Specifies the artifact type and url for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy. 10 (Optional) Specifies the artifact type and url for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT. Important If you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components artifacts.url must specify the location of a JAR file, and the value of artifacts.type must also be set to jar . Invalid values cause the connector fails at runtime. To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries: groovy groovy-jsr223 (scripting agent) groovy-json (module for parsing JSON strings) As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript. Apply the KafkaConnect build specification to the OpenShift cluster by entering the following command: oc create -f dbz-connect.yaml Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy. After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster. Create a KafkaConnector resource to define an instance of each connector that you want to deploy. For example, create the following KafkaConnector CR, and save it as mysql-inventory-connector.yaml Example 2.2. 
mysql-inventory-connector.yaml file that defines the KafkaConnector custom resource for a Debezium connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-mysql 1 spec: class: io.debezium.connector.mysql.MySqlConnector 2 tasksMax: 1 3 config: 4 schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092 schema.history.internal.kafka.topic: schema-changes.inventory database.hostname: mysql.debezium-mysql.svc.cluster.local 5 database.port: 3306 6 database.user: debezium 7 database.password: dbz 8 database.server.id: 184054 9 topic.prefix: inventory-connector-mysql 10 table.include.list: inventory.* 11 ... Table 2.2. Descriptions of connector configuration settings Item Description 1 The name of the connector to register with the Kafka Connect cluster. 2 The name of the connector class. 3 The number of tasks that can operate concurrently. 4 The connector's configuration. 5 The address of the host database instance. 6 The port number of the database instance. 7 The name of the account that Debezium uses to connect to the database. 8 The password that Debezium uses to connect to the database user account. 9 Unique numeric ID of the connector. 10 The topic prefix for the database instance or cluster. The specified name must be formed only from alphanumeric characters or underscores. Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster. This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro connector . 11 The list of tables from which the connector captures change events. Create the connector resource by running the following command: oc create -n <namespace> -f <kafkaConnector> .yaml For example, oc create -n debezium -f mysql-inventory-connector.yaml The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by spec.config.database.dbname in the KafkaConnector CR. After the connector pod is ready, Debezium is running. You are now ready to verify the Debezium deployment . 2.2.2. Verifying that the Debezium connector is running If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information events that occur in the source database. To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc): Verify the connector status. Verify that the connector generates topics. Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table. Prerequisites A Debezium connector is deployed to Streams for Apache Kafka on OpenShift. The OpenShift oc CLI client is installed. You have access to the OpenShift Container Platform web console. Procedure Check the status of the KafkaConnector resource by using one of the following methods: From the OpenShift Container Platform web console: Navigate to Home Search . On the Search page, click Resources to open the Select Resource box, and then type KafkaConnector . 
From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-mysql . In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True . From a terminal window: Enter the following command: oc describe KafkaConnector <connector-name> -n <project> For example, oc describe KafkaConnector inventory-connector-mysql -n debezium The command returns status information that is similar to the following output: Example 2.3. KafkaConnector resource status Name: inventory-connector-mysql Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector ... Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-mysql Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-mysql.inventory inventory-connector-mysql.inventory.addresses inventory-connector-mysql.inventory.customers inventory-connector-mysql.inventory.geom inventory-connector-mysql.inventory.orders inventory-connector-mysql.inventory.products inventory-connector-mysql.inventory.products_on_hand Events: <none> Verify that the connector created Kafka topics: From the OpenShift Container Platform web console. Navigate to Home Search . On the Search page, click Resources to open the Select Resource box, and then type KafkaTopic . From the KafkaTopics list, click the name of the topic that you want to check, for example, inventory-connector-mysql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d . In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True . From a terminal window: Enter the following command: oc get kafkatopics The command returns status information that is similar to the following output: Example 2.4. KafkaTopic resource status Check topic content. From a terminal window, enter the following command: oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic= <topic-name > For example, oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=inventory-connector-mysql.inventory.products_on_hand The format for specifying the topic name is the same as the oc describe command returns in Step 1, for example, inventory-connector-mysql.inventory.addresses . For each event in the topic, the command returns information that is similar to the following output: Example 2.5. Content of a Debezium change event In the preceding example, the payload value shows that the connector snapshot generated a read ( "op" ="r" ) event from the table inventory.products_on_hand . The "before" state of the product_id record is null , indicating that no value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101 . You can run Debezium with multiple Kafka Connect service clusters and multiple Kafka clusters. The number of connectors that you can deploy to a Kafka Connect cluster depends on the volume and rate of database events. 
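Because the Kafka Connect REST API remains available for read-only queries, you can also check connector status directly from inside the Kafka Connect pod. This is an optional sanity check rather than a documented step; it assumes curl is available in the image, and the pod and connector names are examples.
# Query the Kafka Connect REST API (read-only) for the connector and task state
oc exec -n debezium -it <kafka_connect_pod> -- curl -s http://localhost:8083/connectors/inventory-connector-mysql/status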
Next steps For more information about deploying specific connectors, see the following topics in the Debezium User Guide: Deploying the Db2 connector Deploying the MariaDB connector Deploying the MongoDB connector Deploying the MySQL connector Deploying the Oracle connector Deploying the PostgreSQL connector Deploying the SQL Server connector
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: debezium-kafka-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 1 spec: version: 3.6.0 build: 2 output: 3 type: imagestream 4 image: debezium-streams-connect:latest plugins: 5 - name: debezium-connector-mysql artifacts: - type: zip 6 url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mysql/2.7.3.Final-redhat-00001/debezium-connector-mysql-2.7.3.Final-redhat-00001-plugin.zip 7 - type: zip url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat- <build-number> /apicurio-registry-distro-connect-converter-2.4.4.Final-redhat- <build-number> .zip 8 - type: zip url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9 - type: jar url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10 - type: jar url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar - type: jar url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json3.0.11/groovy-json-3.0.11.jar bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093", "create -f dbz-connect.yaml", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-mysql 1 spec: class: io.debezium.connector.mysql.MySqlConnector 2 tasksMax: 1 3 config: 4 schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092 schema.history.internal.kafka.topic: schema-changes.inventory database.hostname: mysql.debezium-mysql.svc.cluster.local 5 database.port: 3306 6 database.user: debezium 7 database.password: dbz 8 database.server.id: 184054 9 topic.prefix: inventory-connector-mysql 10 table.include.list: inventory.* 11", "create -n <namespace> -f <kafkaConnector> .yaml", "create -n debezium -f mysql-inventory-connector.yaml", "describe KafkaConnector <connector-name> -n <project>", "describe KafkaConnector inventory-connector-mysql -n debezium", "Name: inventory-connector-mysql Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-mysql Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-mysql.inventory inventory-connector-mysql.inventory.addresses inventory-connector-mysql.inventory.customers inventory-connector-mysql.inventory.geom inventory-connector-mysql.inventory.orders inventory-connector-mysql.inventory.products inventory-connector-mysql.inventory.products_on_hand Events: <none>", "get kafkatopics", "NAME CLUSTER PARTITIONS REPLICATION FACTOR READY connect-cluster-configs debezium-kafka-cluster 1 1 True connect-cluster-offsets debezium-kafka-cluster 25 1 True connect-cluster-status debezium-kafka-cluster 5 1 True consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True inventory-connector-mysql--a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True 
inventory-connector-mysql.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True inventory-connector-mysql.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True inventory-connector-mysql.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True inventory-connector-mysql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True inventory-connector-mysql.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True inventory-connector-mysql.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True schema-changes.inventory debezium-kafka-cluster 1 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True", "exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic= <topic-name >", "exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh > --bootstrap-server localhost:9092 > --from-beginning > --property print.key=true > --topic=inventory-connector-mysql.inventory.products_on_hand", "{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"}],\"optional\":false,\"name\":\"inventory-connector-mysql.inventory.products_on_hand.Key\"},\"payload\":{\"product_id\":101}} {\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"inventory-connector-mysql.inventory.products_on_hand.Value\",\"field\":\"before\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"product_id\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"quantity\"}],\"optional\":true,\"name\":\"inventory-connector-mysql.inventory.products_on_hand.Value\",\"field\":\"after\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"version\"},{\"type\":\"string\",\"optional\":false,\"field\":\"connector\"},{\"type\":\"string\",\"optional\":false,\"field\":\"name\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"ts_ms\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"ts_us\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"ts_ns\"},{\"type\":\"string\",\"optional\":true,\"name\":\"io.debezium.data.Enum\",\"version\":1,\"parameters\":{\"allowed\":\"true,last,false\"},\"default\":\"false\",\"field\":\"snapshot\"},{\"type\":\"string\",\"optional\":false,\"field\":\"db\"},{\"type\":\"string\",\"optional\":true,\"field\":\"sequence\"},{\"type\":\"string\",\"optional\":true,\"field\":\"table\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"server_id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"gtid\"},{\"type\":\"string\",\"optional\":false,\"field\":\"file\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"pos\"},{\"type\":\"int32\",\"optional\":false,\"field\":\"row\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"thread\"},{\"type\":\"string\",\"optional\":true,\"field\":\"query\"}],\"optional\":false,\"name\":\"io.debezium.connector.mysql.Source\",\"field\":\"source\"},{\"type\
":\"string\",\"optional\":false,\"field\":\"op\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"ts_ms\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"ts_us\"},{\"type\":\"int64\",\"optional\":true,\"field\":\"ts_ns\"},{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"id\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"total_order\"},{\"type\":\"int64\",\"optional\":false,\"field\":\"data_collection_order\"}],\"optional\":true,\"field\":\"transaction\"}],\"optional\":false,\"name\": \"inventory-connector-mysql.inventory.products_on_hand.Envelope\" }, \"payload\" :{ \"before\" : null , \"after\" :{ \"product_id\":101,\"quantity\":3 },\"source\":{\"version\":\"2.7.3.Final-redhat-00001\",\"connector\":\"mysql\",\"name\":\"inventory-connector-mysql\",\"ts_ms\":1638985247805,\"ts_us\":1638985247805000000,\"ts_ns\":1638985247805000000,\"snapshot\":\"true\",\"db\":\"inventory\",\"sequence\":null,\"table\":\"products_on_hand\",\"server_id\":0,\"gtid\":null,\"file\":\"mysql-bin.000003\",\"pos\":156,\"row\":0,\"thread\":null,\"query\":null}, \"op\" : \"r\" ,\"ts_ms\":1638985247805,\"ts_us\":1638985247805102,\"ts_ns\":1638985247805102588,\"transaction\":null}}" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/installing_debezium_on_openshift/installing-debezium-connectors-debezium
6.13. Exporting and Importing Virtual Machines and Templates
6.13. Exporting and Importing Virtual Machines and Templates Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See the Importing Existing Storage Domains section in the Red Hat Virtualization Administration Guide for information on importing storage domains. You can export virtual machines and templates from, and import them to, data centers in the same or different Red Hat Virtualization environment. You can export or import virtual machines by using an export domain, a data domain, or by using a Red Hat Virtualization host. When you export or import a virtual machine or template, properties including basic details such as the name and description, resource allocation, and high availability settings of that virtual machine or template are preserved. The permissions and user roles of virtual machines and templates are included in the OVF files, so that when a storage domain is detached from one data center and attached to another, the virtual machines and templates can be imported with their original permissions and user roles. In order for permissions to be registered successfully, the users and roles related to the permissions of the virtual machines or templates must exist in the data center before the registration process. You can also use the V2V feature to import virtual machines from other virtualization providers, such as RHEL 5 Xen or VMware, or import Windows virtual machines. V2V converts virtual machines so that they can be hosted by Red Hat Virtualization. For more information on installing and using V2V, see Converting Virtual Machines from Other Hypervisors to KVM with virt-v2v . Important Virtual machines must be shut down before being exported or imported. 6.13.1. Exporting a Virtual Machine to the Export Domain Export a virtual machine to the export domain so that it can be imported into a different data center. Before you begin, the export domain must be attached to the data center that contains the virtual machine to be exported. Warning The virtual machine must be shut down before being exported. Exporting a Virtual Machine to the Export Domain Click Compute Virtual Machines and select a virtual machine. Click More Actions ( ), then click Export to Export Domain . Optionally, select the following check boxes in the Export Virtual Machine window: Force Override : overrides existing images of the virtual machine on the export domain. Collapse Snapshots : creates a single export volume per disk. This option removes snapshot restore points and includes the template in a template-based virtual machine, and removes any dependencies a virtual machine has on a template. For a virtual machine that is dependent on a template, either select this option, export the template with the virtual machine, or make sure the template exists in the destination data center. Note When you create a virtual machine from a template by clicking Compute Templates and clicking New VM , you will see two storage allocation options in the Storage Allocation section in the Resource Allocation tab: If Clone is selected, the virtual machine is not dependent on the template. The template does not have to exist in the destination data center.
If Thin is selected, the virtual machine is dependent on the template, so the template must exist in the destination data center or be exported with the virtual machine. Alternatively, select the Collapse Snapshots check box to collapse the template disk and virtual disk into a single disk. To check which option was selected, click a virtual machine's name and click the General tab in the details view. Click OK . The export of the virtual machine begins. The virtual machine displays in Compute Virtual Machines with an Image Locked status while it is exported. Depending on the size of your virtual machine hard disk images, and your storage hardware, this can take up to an hour. Click the Events tab to view progress. When complete, the virtual machine has been exported to the export domain and displays in the VM Import tab of the export domain's details view. 6.13.2. Exporting a Virtual Machine to a Data Domain You can export a virtual machine to a data domain to do any of the following: Migrate the virtual machine or its clone to another data center. Store a clone of the virtual machine as a backup. Warning You cannot export a running virtual machine. Shut down the virtual machine before exporting it. Prerequisite The data domain is attached to a data center. Procedure Click Compute Virtual Machines and select a virtual machine. Click the Disks tab. Select all disks belonging to the virtual machine. Click More Actions ( ), then click Move . Under Target , select the domain. Click OK . The disks migrate to the new domain. Note When you move a disk from one type of data domain another, the disk format changes accordingly. For example, if the disk is on an NFS data domain, and it is in sparse format, then if you move the disk to an iSCSI domain its format changes to preallocated. This is different from using an export domain, because an export domain is NFS. 6.13.3. Importing a Virtual Machine from the Export Domain You have a virtual machine on an export domain. Before the virtual machine can be imported to a new data center, the export domain must be attached to the destination data center. Importing a Virtual Machine into the Destination Data Center Click Storage Domains and select the export domain. The export domain must have a status of Active . Click the export domain's name to go to the details view. Click the VM Import tab to list the available virtual machines to import. Select one or more virtual machines to import and click Import . Select the Target Cluster . Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Click the virtual machine to be imported and click on the Disks sub-tab. From this tab, you can use the Allocation Policy and Storage Domain drop-down lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and can also select the storage domain on which the disk will be stored. An icon is also displayed to indicate which of the disks to be imported acts as the boot disk for that virtual machine. Click OK to import the virtual machines. The Import Virtual Machine Conflict window opens if the virtual machine exists in the virtualized environment. Choose one of the following radio buttons: Don't import Import as cloned and enter a unique name for the virtual machine in the New Name field. 
Optionally select the Apply to all check box to import all duplicated virtual machines with the same suffix, and then enter a suffix in the Suffix to add to the cloned VMs field. Click OK . Important During a single import operation, you can only import virtual machines that share the same architecture. If any of the virtual machines to be imported have a different architecture to that of the other virtual machines to be imported, a warning will display and you will be prompted to change your selection so that only virtual machines with the same architecture will be imported. 6.13.4. Importing a Virtual Machine from a VMware Provider Import virtual machines from a VMware vCenter provider to your Red Hat Virtualization environment. You can import from a VMware provider by entering its details in the Import Virtual Machine(s) window during each import operation, or you can add the VMware provider as an external provider, and select the preconfigured provider during import operations. To add an external provider, see Adding a VMware Instance as a Virtual Machine Provider . Red Hat Virtualization uses V2V to import VMware virtual machines. For OVA files, the only disk format Red Hat Virtualization supports is VMDK. The virt-v2v package must be installed on at least one host (referred to in this procedure as the proxy host). The virt-v2v package is available by default on Red Hat Virtualization Hosts (RHVH) and is installed on Red Hat Enterprise Linux hosts as a dependency of VDSM when added to the Red Hat Virtualization environment. Red Hat Enterprise Linux hosts must be Red Hat Enterprise Linux 7.2 or later. Note The virt-v2v package is not available on the ppc64le architecture and these hosts cannot be used as proxy hosts. Warning The virtual machine must be shut down before being imported. Starting the virtual machine through VMware during the import process can result in data corruption. Important An import operation can only include virtual machines that share the same architecture. If any virtual machine to be imported has a different architecture, a warning will display and you will be prompted to change your selection to include only virtual machines with the same architecture. Note If the import fails, refer to the relevant log file in /var/log/vdsm/import/ and to /var/log/vdsm/vdsm.log on the proxy host for details. Importing a Virtual Machine from VMware Click Compute Virtual Machines . Click More Actions ( ), then click Import to open the Import Virtual Machine(s) window. Select VMware from the Source list. If you have configured a VMware provider as an external provider, select it from the External Provider list. Verify that the provider credentials are correct. If you did not specify a destination data center or proxy host when configuring the external provider, select those options now. If you have not configured a VMware provider, or want to import from a new VMware provider, provide the following details: Select from the list the Data Center in which the virtual machine will be available. Enter the IP address or fully qualified domain name of the VMware vCenter instance in the vCenter field. Enter the IP address or fully qualified domain name of the host from which the virtual machines will be imported in the ESXi field. Enter the name of the data center and the cluster in which the specified ESXi host resides in the Data Center field. 
If you have exchanged the SSL certificate between the ESXi host and the Manager, leave Verify server's SSL certificate checked to verify the ESXi host's certificate. If not, uncheck the option. Enter the Username and Password for the VMware vCenter instance. The user must have access to the VMware data center and ESXi host on which the virtual machines reside. Select a host in the chosen data center with virt-v2v installed to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the VMware vCenter external provider. Click Load to list the virtual machines on the VMware provider that can be imported. Select one or more virtual machines from the Virtual Machines on Source list, and use the arrows to move them to the Virtual Machines to Import list. Click . Note If a virtual machine's network device uses the driver type e1000 or rtl8139, the virtual machine will use the same driver type after it has been imported to Red Hat Virtualization. If required, you can change the driver type to VirtIO manually after the import. To change the driver type after a virtual machine has been imported, see Section 5.2.2, "Editing a Network Interface" . If the network device uses driver types other than e1000 or rtl8139, the driver type is changed to VirtIO automatically during the import. The Attach VirtIO-drivers option allows the VirtIO drivers to be injected to the imported virtual machine files so that when the driver is changed to VirtIO, the device will be properly detected by the operating system. Select the Cluster in which the virtual machines will reside. Select a CPU Profile for the virtual machines. Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Select the Clone check box to change the virtual machine name and MAC addresses, and clone all disks, removing all snapshots. If a virtual machine appears with a warning symbol beside its name or has a tick in the VM in System column, you must clone the virtual machine and change its name. Click on each virtual machine to be imported and click on the Disks sub-tab. Use the Allocation Policy and Storage Domain lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and select the storage domain on which the disk will be stored. An icon is also displayed to indicate which of the disks to be imported acts as the boot disk for that virtual machine. If you selected the Clone check box, change the name of the virtual machine in the General sub-tab. Click OK to import the virtual machines. The CPU type of the virtual machine must be the same as the CPU type of the cluster into which it is being imported. To view the cluster's CPU Type in the Administration Portal: Click Compute Clusters . Select a cluster. Click Edit . Click the General tab. If the CPU type of the virtual machine is different, configure the imported virtual machine's CPU type: Click Compute Virtual Machines . Select the virtual machine. Click Edit . Click the System tab. Click the Advanced Parameters arrow. Specify the Custom CPU Type and click OK . 6.13.5. Exporting a Virtual Machine to a Host You can export a virtual machine to a specific path or mounted NFS shared storage on a host in the Red Hat Virtualization data center. The export will produce an Open Virtual Appliance (OVA) package. Warning The virtual machine must be shut down before being exported. 
Exporting a Virtual Machine to a Host Click Compute Virtual Machines and select a virtual machine. Click More Actions ( ), then click Export to OVA . Select the host from the Host drop-down list. Enter the absolute path to the export directory in the Directory field, including the trailing slash. For example: /images2/ova/ Optionally change the default name of the file in the Name field. Click OK The status of the export can be viewed in the Events tab. 6.13.6. Importing a Virtual Machine from a Host Import an Open Virtual Appliance (OVA) file into your Red Hat Virtualization environment. You can import the file from any Red Hat Virtualization Host in the data center. Important Currently, only Red Hat Virtualization and VMware OVAs can be imported. KVM and Xen are not supported. The import process uses virt-v2v . Only virtual machines running operating systems compatible with virt-v2v can be successfully imported. For a current list of compatible operating systems, see https://access.redhat.com/articles/1351473 . Importing an OVA File Copy the OVA file to a host in your cluster, in a file system location such as var/tmp . Note The location can be a local directory or a remote nfs mount, as long as it has sufficient space and is accessible to the qemu user (UID 36). Ensure that the OVA file has permissions allowing read/write access to the qemu user (UID 36) and the kvm group (GID 36): Click Compute Virtual Machines . Click More Actions ( ), then click Import to open the Import Virtual Machine(s) window. Select Virtual Appliance (OVA) from the Source list. Select a host from the Host list. In the Path field, specify the absolute path of the OVA file. Click Load to list the virtual machine to be imported. Select the virtual machine from the Virtual Machines on Source list, and use the arrows to move it to the Virtual Machines to Import list. Click . Select the Storage Domain for the virtual machine. Select the Target Cluster where the virtual machines will reside. Select the CPU Profile for the virtual machines. Select the Allocation Policy for the virtual machines. Optionally, select the Attach VirtIO-Drivers check box and select the appropriate image on the list to add VirtIO drivers. Select the Allocation Policy for the virtual machines. Select the virtual machine, and on the General tab select the Operating System . On the Network Interfaces tab, select the Network Name and Profile Name . Click the Disks tab to view the Alias , Virtual Size , and Actual Size of the virtual machine. Click OK to import the virtual machines. 6.13.7. Importing a Virtual Machine from a RHEL 5 Xen Host Import virtual machines from Xen on Red Hat Enterprise Linux 5 to your Red Hat Virtualization environment. Red Hat Virtualization uses V2V to import QCOW2 or raw virtual machine disk formats. The virt-v2v package must be installed on at least one host (referred to in this procedure as the proxy host). The virt-v2v package is available by default on Red Hat Virtualization Hosts (RHVH) and is installed on Red Hat Enterprise Linux hosts as a dependency of VDSM when added to the Red Hat Virtualization environment. Red Hat Enterprise Linux hosts must be Red Hat Enterprise Linux 7.2 or later. Warning If you are importing a Windows virtual machine from a RHEL 5 Xen host and you are using VirtIO devices, install the VirtIO drivers before importing the virtual machine. If the drivers are not installed, the virtual machine may not boot after import. 
The VirtIO drivers can be installed from the virtio-win.iso or the RHV-toolsSetup _version .iso . See Section 3.3.2, "Installing the Guest Agents, Tools, and Drivers on Windows" for details. If you are not using VirtIO drivers, review the configuration of the virtual machine before first boot to ensure that VirtIO devices are not being used. Note The virt-v2v package is not available on the ppc64le architecture and these hosts cannot be used as proxy hosts. Important An import operation can only include virtual machines that share the same architecture. If any virtual machine to be imported has a different architecture, a warning will display and you will be prompted to change your selection to include only virtual machines with the same architecture. Note If the import fails, refer to the relevant log file in /var/log/vdsm/import/ and to /var/log/vdsm/vdsm.log on the proxy host for details. Importing a Virtual Machine from RHEL 5 Xen Shut down the virtual machine. Starting the virtual machine through Xen during the import process can result in data corruption. Enable public key authentication between the proxy host and the RHEL 5 Xen host: Log in to the proxy host and generate SSH keys for the vdsm user. Copy the vdsm user's public key to the RHEL 5 Xen host. Log in to the RHEL 5 Xen host to verify that the login works correctly. Log in to the Administration Portal. Click Compute Virtual Machines . Click More Actions ( ), then click Import to open the Import Virtual Machine(s) window. Select the Data Center that contains the proxy host. Select XEN (via RHEL) from the Source drop-down list. Optionally, select a RHEL 5 Xen External Provider from the drop-down list. The URI will be pre-filled with the correct URI. See Adding a RHEL 5 Xen Host as a Virtual Machine Provider in the Administration Guide for more information. Enter the URI of the RHEL 5 Xen host. The required format is pre-filled; you must replace <hostname> with the host name of the RHEL 5 Xen host. Select the proxy host from the Proxy Host drop-down list. Click Load to list the virtual machines on the RHEL 5 Xen host that can be imported. Select one or more virtual machines from the Virtual Machines on Source list, and use the arrows to move them to the Virtual Machines to Import list. Note Due to current limitations, Xen virtual machines with block devices do not appear in the Virtual Machines on Source list. They must be imported manually. See Importing a Block-Based Virtual Machine from a RHEL 5 Xen Host . Click . Select the Cluster in which the virtual machines will reside. Select a CPU Profile for the virtual machines. Use the Allocation Policy and Storage Domain lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and select the storage domain on which the disk will be stored. Note The target storage domain must be a file-based domain. Due to current limitations, specifying a block-based domain causes the V2V operation to fail. If a virtual machine appears with a warning symbol beside its name, or has a tick in the VM in System column, select the Clone check box to clone the virtual machine. Note Cloning a virtual machine changes its name and MAC addresses and clones all of its disks, removing all snapshots. Click OK to import the virtual machines. The CPU type of the virtual machine must be the same as the CPU type of the cluster into which it is being imported. To view the cluster's CPU Type in the Administration Portal: Click Compute Clusters . Select a cluster.
Click Edit . Click the General tab. If the CPU type of the virtual machine is different, configure the imported virtual machine's CPU type: Click Compute Virtual Machines . Select the virtual machine. Click Edit . Click the System tab. Click the Advanced Parameters arrow. Specify the Custom CPU Type and click OK . Importing a Block-Based Virtual Machine from a RHEL 5 Xen Host Enable public key authentication between the proxy host and the RHEL 5 Xen host: Log in to the proxy host and generate SSH keys for the vdsm user. Copy the vdsm user's public key to the RHEL 5 Xen host. Log in to the RHEL 5 Xen host to verify that the login works correctly. Attach an export domain. See Attaching an Existing Export Domain to a Data Center in the Administration Guide for details. On the proxy host, copy the virtual machine from the RHEL 5 Xen host: Convert the virtual machine to libvirt XML and move the file to your export domain: In the Administration Portal, click Storage Domains , click the export domain's name, and click the VM Import tab in the details view to verify that the virtual machine is in your export domain. Import the virtual machine into the destination data domain. See Section 6.13.3, "Importing a Virtual Machine from the Export Domain" for details. 6.13.8. Importing a Virtual Machine from a KVM Host Import virtual machines from KVM to your Red Hat Virtualization environment. Red Hat Virtualization converts KVM virtual machines to the correct format before they are imported. You must enable public key authentication between the KVM host and at least one host in the destination data center (this host is referred to in the following procedure as the proxy host). Warning The virtual machine must be shut down before being imported. Starting the virtual machine through KVM during the import process can result in data corruption. Important An import operation can only include virtual machines that share the same architecture. If any virtual machine to be imported has a different architecture, a warning will display and you will be prompted to change your selection to include only virtual machines with the same architecture. Note If the import fails, refer to the relevant log file in /var/log/vdsm/import/ and to /var/log/vdsm/vdsm.log on the proxy host for details. Importing a Virtual Machine from KVM Enable public key authentication between the proxy host and the KVM host: Log in to the proxy host and generate SSH keys for the vdsm user. Copy the vdsm user's public key to the KVM host. The proxy host's known_hosts file will also be updated to include the host key of the KVM host. Log in to the KVM host to verify that the login works correctly. Log in to the Administration Portal. Click Compute Virtual Machines . Click More Actions ( ), then click Import to open the Import Virtual Machine(s) window. Select the Data Center that contains the proxy host. Select KVM (via Libvirt) from the Source drop-down list. Optionally, select a KVM provider External Provider from the drop-down list. The URI will be pre-filled with the correct URI. See Adding a KVM Host as a Virtual Machine Provider in the Administration Guide for more information. Enter the URI of the KVM host in the following format: Keep the Requires Authentication check box selected. Enter root in the Username field. Enter the Password of the KVM host's root user. Select the Proxy Host from the drop-down list. Click Load to list the virtual machines on the KVM host that can be imported. 
Select one or more virtual machines from the Virtual Machines on Source list, and use the arrows to move them to the Virtual Machines to Import list. Click . Select the Cluster in which the virtual machines will reside. Select a CPU Profile for the virtual machines. Optionally, select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Optionally, select the Clone check box to change the virtual machine name and MAC addresses, and clone all disks, removing all snapshots. If a virtual machine appears with a warning symbol beside its name or has a tick in the VM in System column, you must clone the virtual machine and change its name. Click on each virtual machine to be imported and click on the Disks sub-tab. Use the Allocation Policy and Storage Domain lists to select whether the disk used by the virtual machine will be thin provisioned or preallocated, and select the storage domain on which the disk will be stored. An icon is also displayed to indicate which of the disks to be imported acts as the boot disk for that virtual machine. See Virtual Disk Storage Allocation Policies in the Technical Reference for more information. Note The target storage domain must be a file-based domain. Due to current limitations, specifying a block-based domain causes the operation to fail. If you selected the Clone check box, change the name of the virtual machine in the General tab. Click OK to import the virtual machines. The CPU type of the virtual machine must be the same as the CPU type of the cluster into which it is being imported. To view the cluster's CPU Type in the Administration Portal: Click Compute Clusters . Select a cluster. Click Edit . Click the General tab. If the CPU type of the virtual machine is different, configure the imported virtual machine's CPU type: Click Compute Virtual Machines . Select the virtual machine. Click Edit . Click the System tab. Click the Advanced Parameters arrow. Specify the Custom CPU Type and click OK . 6.13.9. Importing a Red Hat KVM Guest Image You can import a Red Hat-provided KVM virtual machine image. This image is a virtual machine snapshot with a preconfigured instance of Red Hat Enterprise Linux installed. You can configure this image with the cloud-init tool, and use it to provision new virtual machines. This eliminates the need to install and configure the operating system and provides virtual machines that are ready for use. Importing a Red Hat KVM Guest Image Download the most recent KVM virtual machine image from the Download Red Hat Enterprise Linux list, in the Product Software tab. Upload the virtual machine image using the Manager or the REST API. See Uploading a Disk Image to a Storage Domain in the Administration Guide . Create a new virtual machine and attach the uploaded disk image to it. See Section 2.1, "Creating a Virtual Machine" . Optionally, use cloud-init to configure the virtual machine. See Section 7.8, "Using Cloud-Init to Automate the Configuration of Virtual Machines" for details. Optionally, create a template from the virtual machine. You can generate new virtual machines from this template. See Chapter 7, Templates for information about creating templates and generating virtual machines from templates.
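Returning to the KVM import in Section 6.13.8: before you click Load , it can help to confirm that the proxy host can actually reach the source libvirt daemon over the URI format shown above. This is an optional check rather than a documented step; the host name is an example.
# From the proxy host, list the source host's virtual machines as the vdsm user
sudo -u vdsm virsh -c qemu+ssh://[email protected]/system list --all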
[ "chown 36:36 path_to_OVA_file/file.OVA", "sudo -u vdsm ssh-keygen", "sudo -u vdsm ssh-copy-id root@ xenhost.example.com", "sudo -u vdsm ssh root@ xenhost.example.com", "sudo -u vdsm ssh-keygen", "sudo -u vdsm ssh-copy-id root@ xenhost.example.com", "sudo -u vdsm ssh root@ xenhost.example.com", "virt-v2v-copy-to-local -ic xen+ssh://root@ xenhost.example.com vmname", "virt-v2v -i libvirtxml vmname .xml -o rhev -of raw -os storage.example.com:/exportdomain", "sudo -u vdsm ssh-keygen", "sudo -u vdsm ssh-copy-id root@ kvmhost.example.com", "sudo -u vdsm ssh root@ kvmhost.example.com", "qemu+ssh://root@ kvmhost.example.com /system" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-exporting_and_importing_virtual_machines_and_templates
1.2. Comparing Static to Dynamic IP Addressing
1.2. Comparing Static to Dynamic IP Addressing Static IP addressing When a device is assigned a static IP address, the address does not change over time unless changed manually. It is recommended to use static IP addressing if you want: To ensure network address consistency for servers such as DNS and authentication servers. To use out-of-band management devices that work independently of other network infrastructure. All the configuration tools listed in Section 3.1, "Selecting Network Configuration Methods" allow assigning static IP addresses manually. The nmcli tool is also suitable, described in Section 3.3.8, "Adding and Configuring a Static Ethernet Connection with nmcli" . For more information on automated configuration and management, see the OpenLMI chapter in the Red Hat Enterprise Linux 7 System Administrators Guide . The Red Hat Enterprise Linux 7 Installation Guide documents the use of a Kickstart file, which can also be used for automating the assignment of network settings. Dynamic IP addressing When a device is assigned a dynamic IP address, the address changes over time. For this reason, dynamic addressing is recommended for devices that connect to the network only occasionally, because the IP address can change after the machine reboots. Dynamic IP addresses are more flexible and easier to set up and administer. The Dynamic Host Configuration Protocol ( DHCP ) is the traditional method of dynamically assigning network configurations to hosts. See Section 14.1, "Why Use DHCP?" for more information. You can also use the nmcli tool, described in Section 3.3.7, "Adding and Configuring a Dynamic Ethernet Connection with nmcli" . Note There is no strict rule defining when to use a static or a dynamic IP address. It depends on the user's needs, preferences, and the network environment. By default, NetworkManager calls the DHCP client, dhclient .
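As an illustration only (the interface name enp0s3 and the addresses are placeholders, not values from this guide), the difference between the two approaches with nmcli looks roughly like this:
# Static IPv4 profile: the address is assigned manually and does not change
nmcli connection add type ethernet con-name static-enp0s3 ifname enp0s3 ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
# Dynamic IPv4 profile: the address is requested from a DHCP server each time the connection activates
nmcli connection add type ethernet con-name dhcp-enp0s3 ifname enp0s3 ipv4.method auto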
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Comparing_Static_to_Dynamic_IP_Addressing
Chapter 5. Debugging issues
Chapter 5. Debugging issues Central saves information to its container logs. 5.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: USD export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 5.2. Viewing the logs You can use either the oc or kubectl command to view the logs for the Central pod. Procedure To view the logs for the Central pod by using kubectl , run the following command : USD kubectl logs -n stackrox <central_pod> To view the logs for the Central pod by using oc , run the following command : USD oc logs -n stackrox <central_pod> 5.3. Viewing the current log level You can change the log level to see more or less information in Central logs. Procedure Run the following command to view the current log level: USD roxctl central debug log Additional resources roxctl central debug 5.4. Changing the log level Procedure Run the following command to change the log level: USD roxctl central debug log --level= <log_level> 1 1 The acceptable values for <log_level> are Panic , Fatal , Error , Warn , Info , and Debug . Additional resources roxctl central debug 5.5. Retrieving debugging information Procedure Run the following command to gather the debugging information for investigating issues: USD roxctl central debug dump To generate a diagnostic bundle with the RHACS administrator password or API token and central address, follow the procedure in Generating a diagnostic bundle by using the roxctl CLI . Additional resources roxctl central debug
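As a usage sketch (the endpoint is a placeholder and must point at your own Central instance), a typical debugging session combines the commands above:
# Point roxctl at Central, then raise verbosity while reproducing the issue
export ROX_ENDPOINT=central.example.com:443
roxctl central debug log --level=Debug
# Collect the debugging information, then set the level back to Info when finished
roxctl central debug dump
roxctl central debug log --level=Info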
[ "export ROX_ENDPOINT= <host:port> 1", "kubectl logs -n stackrox <central_pod>", "oc logs -n stackrox <central_pod>", "roxctl central debug log", "roxctl central debug log --level= <log_level> 1", "roxctl central debug dump" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/roxctl_cli/debugging-issues-1
probe::netdev.ioctl
probe::netdev.ioctl Name probe::netdev.ioctl - Called when a network device receives an IOCTL request Synopsis Values cmd The IOCTL request code arg The IOCTL argument (usually the name of the netdev interface)
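A minimal sketch of how this probe might be used, assuming SystemTap and the matching kernel debuginfo are installed; the one-liner below is illustrative and is not taken from the tapset reference:
# Print the request code and argument every time a network device IOCTL is issued
stap -e 'probe netdev.ioctl { printf("%s: cmd=%d arg=%s\n", pp(), cmd, arg) }'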
[ "netdev.ioctl" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-netdev-ioctl
29.2. The TCPGOSSIP JGroups Protocol
29.2. The TCPGOSSIP JGroups Protocol The TCPGOSSIP discovery protocol uses one or more configured GossipRouter processes to store information about the nodes in the cluster. Important It is vital that the GossipRouter process is consistently available to all nodes in the cluster, because without this process it is not possible to add additional nodes. For this reason it is strongly recommended to deploy this process in a highly available manner; for example, an Availability Set with multiple virtual machines may be used. Running the GossipRouter The GossipRouter is included in the JGroups jar file, and must be running before any nodes are started. This process may be started by pointing to the GossipRouter class in the JGroups jar file included with JBoss Data Grid: When multiple GossipRouters are specified, a node registers with all of them; however, it only retrieves information from the first available GossipRouter. If a GossipRouter is unavailable, it is marked as failed and removed from the list, and a background thread is started to periodically attempt reconnecting to the failed GossipRouter. Once the thread successfully reconnects, the GossipRouter is reinserted into the list. Configuring JBoss Data Grid to use TCPGOSSIP (Library Mode) In Library Mode the JGroups XML file should be used to configure TCPGOSSIP ; however, there is no TCPGOSSIP configuration included by default. It is recommended to use one of the preexisting files specified in Section 26.2.2, "Pre-Configured JGroups Files" and then adjust the configuration to include TCPGOSSIP . For instance, default-configs/default-jgroups-ec2.xml could be selected and the S3_PING protocol removed, and then the following block added in its place: Configuring JBoss Data Grid to use TCPGOSSIP (Remote Client-Server Mode) In Remote Client-Server Mode a stack may be defined for TCPGOSSIP in the jgroups subsystem of the server's configuration file. The following configuration snippet contains an example of this:
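For illustration, with placeholder values filled in (the jar version, the address 192.0.2.5, and port 12001 are examples, not values from this guide), starting a GossipRouter from the documented command might look like:
# Start the GossipRouter on a host that every cluster node can reach
java -classpath jgroups-3.6.x.jar org.jgroups.stack.GossipRouter -bindaddress 192.0.2.5 -port 12001
Each node's discovery element would then reference that router, for example <TCPGOSSIP initial_hosts="192.0.2.5[12001]" />; listing a second router in initial_hosts gives the register-with-all, read-from-first behavior described above.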
[ "java -classpath jgroups-USD{jgroups.version}.jar org.jgroups.stack.GossipRouter -bindaddress IP_ADDRESS -port PORT", "<TCPGOSSIP initial_hosts=\"IP_ADDRESS_0[PORT_0],IP_ADDRESS_1[PORT_1]\" />", "<subsystem xmlns=\"urn:infinispan:server:jgroups:6.1\" default-stack=\"USD{jboss.default.jgroups.stack:tcpgossip}\"> [...] <stack name=\"jdbc_ping\"> <transport type=\"TCP\" socket-binding=\"jgroups-tcp\"/> <protocol type=\"TCPGOSSIP\"> <property name=\"initial_hosts\">IP_ADDRESS_0[PORT_0],IP_ADDRESS_1[PORT_1]</property> </protocol> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\" socket-binding=\"jgroups-tcp-fd\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <protocol type=\"pbcast.NAKACK2\"> <property name=\"use_mcast_xmit\">false</property> </protocol> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG2\"/> </stack> [...] </subsystem>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/the_tcpgossip_jgroups_protocol
Chapter 1. Preparing to install on IBM Cloud VPC
Chapter 1. Preparing to install on IBM Cloud VPC The installation workflows documented in this section are for IBM Cloud VPC infrastructure environments. IBM Cloud Classic is not supported at this time. For more information about the difference between Classic and VPC infrastructures, see the IBM documentation . 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on IBM Cloud VPC Before installing OpenShift Container Platform on IBM Cloud VPC, you must create a service account and configure an IBM Cloud account. See Configuring an IBM Cloud account for details about creating an account, enabling API services, configuring DNS, IBM Cloud account limits, and supported IBM Cloud VPC regions. You must manually manage your cloud credentials when installing a cluster to IBM Cloud VPC. Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 1.3. Choosing a method to install OpenShift Container Platform on IBM Cloud VPC You can install OpenShift Container Platform on IBM Cloud VPC using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Cloud VPC using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Cloud VPC infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Cloud VPC : You can install a customized cluster on IBM Cloud VPC infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Cloud VPC with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on IBM Cloud VPC into an existing VPC : You can install OpenShift Container Platform on an existing IBM Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing Virtual Private Cloud (VPC). You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.4. steps Configuring an IBM Cloud account
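Because the CCO must run in manual mode on IBM Cloud VPC, the install-config.yaml file you generate later needs credentialsMode set accordingly. The following heavily abbreviated sketch is for orientation only; all values are placeholders and the full file contains many more fields:
apiVersion: v1
baseDomain: example.com            # placeholder domain
credentialsMode: Manual            # required so the CCO does not try to manage credentials itself
metadata:
  name: example-cluster            # placeholder cluster name
platform:
  ibmcloud:
    region: us-south               # placeholder region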
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_cloud_vpc/preparing-to-install-on-ibm-cloud
15.5. Additional Resources
15.5. Additional Resources RPM is an extremely complex utility with many options and methods for querying, installing, upgrading, and removing packages. Refer to the following resources to learn more about RPM. 15.5.1. Installed Documentation rpm --help - This command displays a quick reference of RPM parameters. man rpm - The RPM man page gives more detail about RPM parameters than the rpm --help command.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/package_management_with_rpm-additional_resources
Chapter 7. Installing a cluster on IBM Power Virtual Server in a restricted network
Chapter 7. Installing a cluster on IBM Power Virtual Server in a restricted network In OpenShift Container Platform 4.14, you can install a cluster on IBM Cloud(R) in a restricted network by creating an internal mirror of the installation release content on an existing Virtual Private Cloud (VPC) on IBM Cloud(R). Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in IBM Cloud(R). When installing a cluster in a restricted network, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 7.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. 
About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). 7.3.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.3.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 7.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.powervs field: vpcName: <existing_vpc> vpcSubnets: <vpcSubnet> For platform.powervs.vpcName , specify the name for the existing IBM Cloud(R). For platform.powervs.vpcSubnets , specify the existing subnets. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 7.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" 12 region: "powervs-region" vpcRegion: "vpc-region" vpcName: name-of-existing-vpc 13 vpcSubnets: 14 - name-of-existing-vpc-subnet zone: "powervs-zone" serviceInstanceID: "service-instance-id" publish: Internal credentialsMode: Manual pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: ssh-ed25519 AAAA... 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 8 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 
4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The name of an existing resource group. The existing VPC and subnets should be in this resource group. The cluster is deployed to this resource group. 13 Specify the name of an existing VPC. 14 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 You can optionally provide the sshKey value that you use to access the machines in your cluster. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section from the output of the command to mirror the repository. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . 
To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. 
Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.12. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.13. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.14. Next steps Customize your cluster Optional: Opt out of remote health reporting Optional: Registering your disconnected cluster
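As an informal post-install check (the installation directory path is the placeholder used throughout this chapter), you can confirm basic cluster health right after logging in with the exported kubeconfig:
export KUBECONFIG=<installation_directory>/auth/kubeconfig
# Every node should eventually report Ready
oc get nodes
# Every cluster Operator should eventually report Available=True without Degraded conditions
oc get clusteroperators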
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> vpcSubnets: <vpcSubnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" 12 region: \"powervs-region\" vpcRegion: \"vpc-region\" vpcName: name-of-existing-vpc 13 vpcSubnets: 14 - name-of-existing-vpc-subnet zone: \"powervs-zone\" serviceInstanceID: \"service-instance-id\" publish: Internal credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: ssh-ed25519 AAAA... 
16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/installing-restricted-networks-ibm-power-vs
Chapter 6. Using credentials and configurations in workspaces
Chapter 6. Using credentials and configurations in workspaces You can use your credentials and configurations in your workspaces. To do so, mount your credentials and configurations to the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance: Mount your credentials and sensitive configurations as Kubernetes Secrets . Mount your non-sensitive configurations as Kubernetes ConfigMaps . If you need to allow the Dev Workspace Pods in the cluster to access container registries that require authentication, create an image pull Secret for the Dev Workspace Pods. The mounting process uses the standard Kubernetes mounting mechanism and requires applying additional labels and annotations to your existing resources. Resources are mounted when starting a new workspace or restarting an existing one. You can create permanent mount points for various components: Maven configuration, such as the user-specific settings.xml file SSH key pairs Git-provider access tokens Git configuration AWS authorization tokens Configuration files Persistent storage Additional resources Kubernetes Documentation: Secrets Kubernetes Documentation: ConfigMaps 6.1. Mounting Secrets To mount confidential data into your workspaces, use Kubernetes Secrets. Using Kubernetes Secrets, you can mount usernames, passwords, SSH key pairs, authentication tokens (for example, for AWS), and sensitive configurations. Mount Kubernetes Secrets to the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . In your user project, you created a new Secret or determined an existing Secret to mount to all Dev Workspace containers. Procedure Add the labels, which are required for mounting the Secret, to the Secret. Optional: Use the annotations to configure how the Secret is mounted. Table 6.1. Optional annotations Annotation Description controller.devfile.io/mount-path: Specifies the mount path. Defaults to /etc/secret/ <Secret_name> . controller.devfile.io/mount-as: Specifies how the resource should be mounted: file , subpath , or env . Defaults to file . mount-as: file mounts the keys and values as files within the mount path. mount-as: subpath mounts the keys and values within the mount path using subpath volume mounts. mount-as: env mounts the keys and values as environment variables in all Dev Workspace containers. Example 6.1. Mounting a Secret as a file apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' annotations: controller.devfile.io/mount-path: '/home/user/.m2' data: settings.xml: <Base64_encoded_content> When you start a workspace, the /home/user/.m2/settings.xml file will be available in the Dev Workspace containers. With Maven, you can set a custom path for the settings.xml file. For example: 6.1.1. Creating image pull Secrets To allow the Dev Workspace Pods in the OpenShift cluster of your organization's OpenShift Dev Spaces instance to access container registries that require authentication, create an image pull Secret. You can create image pull Secrets by using oc or a .dockercfg file or a config.json file. 6.1.1.1. Creating an image pull Secret with oc Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. 
See Getting started with the CLI . Procedure In your user project, create an image pull Secret with your private container registry details and credentials: Add the following label to the image pull Secret: 6.1.1.2. Creating an image pull Secret from a .dockercfg file If you already store the credentials for the private container registry in a .dockercfg file, you can use that file to create an image pull Secret. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . base64 command line tools are installed in the operating system you are using. Procedure Encode the .dockercfg file to Base64: Create a new OpenShift Secret in your user project: apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockercfg: <Base64_content_of_.dockercfg> type: kubernetes.io/dockercfg Apply the Secret: 6.1.1.3. Creating an image pull Secret from a config.json file If you already store the credentials for the private container registry in a USDHOME/.docker/config.json file, you can use that file to create an image pull Secret. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . base64 command line tools are installed in the operating system you are using. Procedure Encode the USDHOME/.docker/config.json file to Base64. Create a new OpenShift Secret in your user project: apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockerconfigjson: <Base64_content_of_config.json> type: kubernetes.io/dockerconfigjson Apply the Secret: 6.1.2. Using a Git-provider access token OAuth for GitHub, GitLab, Bitbucket, or Microsoft Azure Repos needs to be configured by the administrator of your organization's OpenShift Dev Spaces instance. If your administrator could not configure it for OpenShift Dev Spaces users, the workaround is for you to use a personal access token. You can configure personal access tokens on the User Preferences page of your OpenShift Dev Spaces dashboard: https:// <openshift_dev_spaces_fqdn> /dashboard/#/user-preferences?tab=personal-access-tokens , or apply it manually as a Kubernetes Secret in the namespace. Mounting your access token as a Secret enables the OpenShift Dev Spaces Server to access the remote repository that is cloned during workspace creation, including access to the repository's /.che and /.vscode folders. Apply the Secret in your user project of the OpenShift cluster of your organization's OpenShift Dev Spaces instance. After applying the Secret, you can create workspaces with clones of private Git repositories that are hosted on GitHub, GitLab, Bitbucket Server, or Microsoft Azure Repos. You can create and apply multiple access-token Secrets per Git provider. You must apply each of those Secrets in your user project. Prerequisites You have logged in to the cluster. Tip On OpenShift, you can use the oc command-line tool to log in to the cluster: USD oc login https:// <openshift_dev_spaces_fqdn> --username= <my_user> Procedure Generate your access token on your Git provider's website. Important Personal access tokens are sensitive information and should be kept confidential. Treat them like passwords. 
If you are having trouble with authentication, ensure you are using the correct token and have the appropriate permissions for cloning repositories: Open a terminal locally on your computer Use the git command to clone the repository using your personal access token. The format of the git command vary based on the Git Provider. As an example, GitHub personal access token verification can be done using the following command: Replace <PAT> with your personal access token, and username/repo with the appropriate repository path. If the token is valid and has the necessary permissions, the cloning process should be successful. Otherwise, this is an indicator of incorrect personal access token, insufficient permissions, or other issues. Important For GitHub Enterprise Cloud, verify that the token is authorized for use within the organization . Go to https:// <openshift_dev_spaces_fqdn> /api/user/id in the web browser to get your OpenShift Dev Spaces user ID. Prepare a new OpenShift Secret. kind: Secret apiVersion: v1 metadata: name: personal-access-token- <your_choice_of_name_for_this_token> labels: app.kubernetes.io/component: scm-personal-access-token app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/che-userid: <devspaces_user_id> 1 che.eclipse.org/scm-personal-access-token-name: <git_provider_name> 2 che.eclipse.org/scm-url: <git_provider_endpoint> 3 che.eclipse.org/scm-organization: <git_provider_organization> 4 stringData: token: <Content_of_access_token> type: Opaque 1 Your OpenShift Dev Spaces user ID. 2 The Git provider name: github or gitlab or bitbucket-server or azure-devops . 3 The Git provider URL. 4 This line is only applicable to azure-devops : your Git provider user organization. Visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . Switch to your OpenShift Dev Spaces user namespace in the cluster. Tip On OpenShift: The oc command-line tool can return the namespace you are currently on in the cluster, which you can use to check your current namespace: USD oc project You can switch to your OpenShift Dev Spaces user namespace on a command line if needed: USD oc project <your_user_namespace> Apply the Secret. Tip On OpenShift, you can use the oc command-line tool: Verification Start a new workspace by using the URL of a remote Git repository that the Git provider hosts. Make some changes and push to the remote Git repository from the workspace. Additional resources Deploying Che with support for Git repositories with self-signed certificates Authorizing a personal access token for use with SAML single sign-on 6.2. Mounting ConfigMaps To mount non-confidential configuration data into your workspaces, use Kubernetes ConfigMaps. Using Kubernetes ConfigMaps, you can mount non-sensitive data such as configuration values for an application. Mount Kubernetes ConfigMaps to the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . In your user project, you created a new ConfigMap or determined an existing ConfigMap to mount to all Dev Workspace containers. Procedure Add the labels, which are required for mounting the ConfigMap, to the ConfigMap. Optional: Use the annotations to configure how the ConfigMap is mounted. Table 6.2. 
Optional annotations Annotation Description controller.devfile.io/mount-path: Specifies the mount path. Defaults to /etc/config/ <ConfigMap_name> . controller.devfile.io/mount-as: Specifies how the resource should be mounted: file , subpath , or env . Defaults to file . mount-as:file mounts the keys and values as files within the mount path. mount-as:subpath mounts the keys and values within the mount path using subpath volume mounts. mount-as:env mounts the keys and values as environment variables in all Dev Workspace containers. Example 6.2. Mounting a ConfigMap as environment variables kind: ConfigMap apiVersion: v1 metadata: name: my-settings labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: <env_var_1> : <value_1> <env_var_2> : <value_2> When you start a workspace, the <env_var_1> and <env_var_2> environment variables will be available in the Dev Workspace containers. 6.2.1. Mounting Git configuration Note The user.name and user.email fields will be set automatically to the gitconfig content from a git provider, connected to OpenShift Dev Spaces by a Git-provider access token or a token generated via OAuth, if username and email are set on the provider's user profile page. Follow the instructions below to mount a Git config file in a workspace. Prerequisites You have logged in to the cluster. Procedure Prepare a new OpenShift ConfigMap. kind: ConfigMap apiVersion: v1 metadata: name: workspace-userdata-gitconfig-configmap namespace: <user_namespace> 1 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/ data: gitconfig: <gitconfig content> 2 1 A user namespace. Visit https:// <openshift_dev_spaces_fqdn> /api/kubernetes/namespace to get your OpenShift Dev Spaces user namespace as name . 2 The content of your gitconfig file content. Apply the ConfigMap. Verification Start a new workspace by using the URL of a remote Git repository that the Git provider hosts. Once the workspace is started, open a new terminal in the tools container and run git config --get-regexp user.* . Your Git user name and email should appear in the output. 6.3. Enabling artifact repositories in a restricted environment By configuring technology stacks, you can work with artifacts from in-house repositories using self-signed certificates: Maven Gradle npm Python Go NuGet 6.3.1. Maven You can enable a Maven artifact repository in Maven workspaces that run in a restricted environment. Prerequisites You are not running any Maven workspace. You know your user namespace, which is <username> -devspaces where <username> is your OpenShift Dev Spaces username. Procedure In the <username> -devspaces namespace, apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. 
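For reference, one way to produce the Base64 value with line wrapping disabled (the certificate file name ca.crt is only an example) is:
base64 -w 0 ca.crt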
In the <username> -devspaces namespace, apply the ConfigMap to create the settings.xml file: kind: ConfigMap apiVersion: v1 metadata: name: settings-xml annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: settings.xml: | <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository/> <interactiveMode/> <offline/> <pluginGroups/> <servers/> <mirrors> <mirror> <id>redhat-ga-mirror</id> <name>Red Hat GA</name> <url>https:// <maven_artifact_repository_route> /repository/redhat-ga/</url> <mirrorOf>redhat-ga</mirrorOf> </mirror> <mirror> <id>maven-central-mirror</id> <name>Maven Central</name> <url>https:// <maven_artifact_repository_route> /repository/maven-central/</url> <mirrorOf>maven-central</mirrorOf> </mirror> <mirror> <id>jboss-public-repository-mirror</id> <name>JBoss Public Maven Repository</name> <url>https:// <maven_artifact_repository_route> /repository/jboss-public/</url> <mirrorOf>jboss-public-repository</mirrorOf> </mirror> </mirrors> <proxies/> <profiles/> <activeProfiles/> </settings> Optional: When using JBoss EAP-based devfiles, apply a second settings-xml ConfigMap in the <username> -devspaces namespace, and with the same content, a different name, and the /home/jboss/.m2 mount path. In the <username> -devspaces namespace, apply the ConfigMap for the TrustStore initialization script: Java 8 kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java8-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -trustcacerts -keystore ~/.java/current/jre/lib/security/cacerts -storepass changeit Java 11 kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java11-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit Start a Maven workspace. Open a new terminal in the tools container. Run ~/init-truststore.sh . 6.3.2. Gradle You can enable a Gradle artifact repository in Gradle workspaces that run in a restricted environment. Prerequisites You are not running any Gradle workspace. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. 
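As an alternative to writing this Secret YAML by hand, you could create and then label and annotate an equivalent Secret with oc (a sketch; ca.crt and the <username>-devspaces namespace are illustrative):
oc create secret generic tls-cer --from-file=tls.cer=ca.crt -n <username>-devspaces
oc label secret tls-cer controller.devfile.io/mount-to-devworkspace=true controller.devfile.io/watch-secret=true -n <username>-devspaces
oc annotate secret tls-cer controller.devfile.io/mount-path=/home/user/certs controller.devfile.io/mount-as=file -n <username>-devspaces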
Apply the ConfigMap for the TrustStore initialization script: kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit Apply the ConfigMap for the Gradle init script: kind: ConfigMap apiVersion: v1 metadata: name: init-gradle annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.gradle labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init.gradle: | allprojects { repositories { mavenLocal () maven { url "https:// <gradle_artifact_repository_route> /repository/maven-public/" credentials { username "admin" password "passwd" } } } } Start a Gradle workspace. Open a new terminal in the tools container. Run ~/init-truststore.sh . 6.3.3. npm You can enable an npm artifact repository in npm workspaces that run in a restricted environment. Prerequisites You are not running any npm workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /public-certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: nexus.cer: >- <Base64_encoded_content_of_public_cert>__ 1 1 Base64 encoding with disabled line wrapping. Apply the ConfigMap to set the following environment variables in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: NPM_CONFIG_REGISTRY: >- https:// <npm_artifact_repository_route> /repository/npm-all/ 6.3.3.1. Disabling self-signed certificate validation Run the command below to disable SSL/TLS, bypassing the validation of your self-signed certificates. Note that this is a potential security risk. For a better solution, configure a self-signed certificate you trust with NODE_EXTRA_CA_CERTS . Procedure Run the following command in the terminal: npm config set strict-ssl false 6.3.3.2. Configuring NODE_EXTRA_CA_CERTS to use a certificate Use the command below to set NODE_EXTRA_CA_CERTS to point to where you have your SSL/TLS certificate. Procedure Run the following command in the terminal: `export NODE_EXTRA_CA_CERTS=/public-certs/nexus.cer` 1 `npm install` 1 /public-certs/nexus.cer is the path to self-signed SSL/TLS certificate of Nexus artifactory. 6.3.4. Python You can enable a Python artifact repository in Python workspaces that run in a restricted environment. Prerequisites You are not running any Python workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. 
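If you hit the boot loop described in the warning, one possible workaround (a sketch, not part of the documented procedure) is to set the variables used in the procedure below directly in the container component of your devfile instead of through a ConfigMap, for example:
components:
  - name: tools
    container:
      image: <tools_image>
      env:
        - name: PIP_INDEX_URL
          value: https://<python_artifact_repository_route>/repository/pypi-all/
        - name: PIP_CERT
          value: /home/user/certs/tls.cer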
Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. Apply the ConfigMap to set the following environment variables in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: PIP_INDEX_URL: >- https:// <python_artifact_repository_route> /repository/pypi-all/ PIP_CERT: /home/user/certs/tls.cer 6.3.5. Go You can enable a Go artifact repository in Go workspaces that run in a restricted environment. Prerequisites You are not running any Go workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. Apply the ConfigMap to set the following environment variables in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: GOPROXY: >- http:// <athens_proxy_route> SSL_CERT_FILE: /home/user/certs/tls.cer 6.3.6. NuGet You can enable a NuGet artifact repository in NuGet workspaces that run in a restricted environment. Prerequisites You are not running any NuGet workspace. Warning Applying a ConfigMap that sets environment variables might cause a workspace boot loop. If you encounter this behavior, remove the ConfigMap and edit the devfile directly. Procedure Apply the Secret for the TLS certificate: kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1 1 Base64 encoding with disabled line wrapping. 
Apply the ConfigMap to set the environment variable for the path of the TLS certificate file in the tools container: kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: SSL_CERT_FILE: /home/user/certs/tls.cer Apply the ConfigMap to create the nuget.config file: kind: ConfigMap apiVersion: v1 metadata: name: init-nuget annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /projects labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: nuget.config: | <?xml version="1.0" encoding="UTF-8"?> <configuration> <packageSources> <add key="nexus2" value="https:// <nuget_artifact_repository_route> /repository/nuget-group/"/> </packageSources> <packageSourceCredentials> <nexus2> <add key="Username" value="admin" /> <add key="Password" value="passwd" /> </nexus2> </packageSourceCredentials> </configuration>
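To confirm that a NuGet workspace picks up the mounted configuration, you can, for example, list the configured sources from the workspace terminal against the mounted file (the path matches the mount path above):
dotnet nuget list source --configfile /projects/nuget.config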
[ "oc label secret <Secret_name> controller.devfile.io/mount-to-devworkspace=true controller.devfile.io/watch-secret=true", "apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' annotations: controller.devfile.io/mount-path: '/home/user/.m2' data: settings.xml: <Base64_encoded_content>", "mvn --settings /home/user/.m2/settings.xml clean install", "oc create secret docker-registry <Secret_name> --docker-server= <registry_server> --docker-username= <username> --docker-password= <password> --docker-email= <email_address>", "oc label secret <Secret_name> controller.devfile.io/devworkspace_pullsecret=true controller.devfile.io/watch-secret=true", "cat .dockercfg | base64 | tr -d '\\n'", "apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockercfg: <Base64_content_of_.dockercfg> type: kubernetes.io/dockercfg", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "cat config.json | base64 | tr -d '\\n'", "apiVersion: v1 kind: Secret metadata: name: <Secret_name> labels: controller.devfile.io/devworkspace_pullsecret: 'true' controller.devfile.io/watch-secret: 'true' data: .dockerconfigjson: <Base64_content_of_config.json> type: kubernetes.io/dockerconfigjson", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "git clone https://<PAT>@github.com/username/repo.git", "kind: Secret apiVersion: v1 metadata: name: personal-access-token- <your_choice_of_name_for_this_token> labels: app.kubernetes.io/component: scm-personal-access-token app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/che-userid: <devspaces_user_id> 1 che.eclipse.org/scm-personal-access-token-name: <git_provider_name> 2 che.eclipse.org/scm-url: <git_provider_endpoint> 3 che.eclipse.org/scm-organization: <git_provider_organization> 4 stringData: token: <Content_of_access_token> type: Opaque", "oc apply -f - <<EOF <Secret_prepared_in_step_5> EOF", "oc label configmap <ConfigMap_name> controller.devfile.io/mount-to-devworkspace=true controller.devfile.io/watch-configmap=true", "kind: ConfigMap apiVersion: v1 metadata: name: my-settings labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: <env_var_1> : <value_1> <env_var_2> : <value_2>", "kind: ConfigMap apiVersion: v1 metadata: name: workspace-userdata-gitconfig-configmap namespace: <user_namespace> 1 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/ data: gitconfig: <gitconfig content> 2", "oc apply -f - <<EOF <ConfigMap_prepared_in_step_1> EOF", "kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1", "kind: ConfigMap apiVersion: v1 metadata: name: settings-xml annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: settings.xml: | <settings 
xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd\"> <localRepository/> <interactiveMode/> <offline/> <pluginGroups/> <servers/> <mirrors> <mirror> <id>redhat-ga-mirror</id> <name>Red Hat GA</name> <url>https:// <maven_artifact_repository_route> /repository/redhat-ga/</url> <mirrorOf>redhat-ga</mirrorOf> </mirror> <mirror> <id>maven-central-mirror</id> <name>Maven Central</name> <url>https:// <maven_artifact_repository_route> /repository/maven-central/</url> <mirrorOf>maven-central</mirrorOf> </mirror> <mirror> <id>jboss-public-repository-mirror</id> <name>JBoss Public Maven Repository</name> <url>https:// <maven_artifact_repository_route> /repository/jboss-public/</url> <mirrorOf>jboss-public-repository</mirrorOf> </mirror> </mirrors> <proxies/> <profiles/> <activeProfiles/> </settings>", "kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java8-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -trustcacerts -keystore ~/.java/current/jre/lib/security/cacerts -storepass changeit", "kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-java11-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit", "kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1", "kind: ConfigMap apiVersion: v1 metadata: name: init-truststore annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/ labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init-truststore.sh: | #!/usr/bin/env bash keytool -importcert -noprompt -file /home/user/certs/tls.cer -cacerts -storepass changeit", "kind: ConfigMap apiVersion: v1 metadata: name: init-gradle annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.gradle labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: init.gradle: | allprojects { repositories { mavenLocal () maven { url \"https:// <gradle_artifact_repository_route> /repository/maven-public/\" credentials { username \"admin\" password \"passwd\" } } } }", "kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /public-certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: nexus.cer: >- <Base64_encoded_content_of_public_cert>__ 1", "kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 
'true' controller.devfile.io/watch-configmap: 'true' data: NPM_CONFIG_REGISTRY: >- https:// <npm_artifact_repository_route> /repository/npm-all/", "npm config set strict-ssl false", "`export NODE_EXTRA_CA_CERTS=/public-certs/nexus.cer` 1 `npm install`", "kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1", "kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: PIP_INDEX_URL: >- https:// <python_artifact_repository_route> /repository/pypi-all/ PIP_CERT: /home/user/certs/tls.cer", "kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1", "kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: GOPROXY: >- http:// <athens_proxy_route> SSL_CERT_FILE: /home/user/certs/tls.cer", "kind: Secret apiVersion: v1 metadata: name: tls-cer annotations: controller.devfile.io/mount-path: /home/user/certs controller.devfile.io/mount-as: file labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-secret: 'true' data: tls.cer: >- <Base64_encoded_content_of_public_cert> 1", "kind: ConfigMap apiVersion: v1 metadata: name: disconnected-env annotations: controller.devfile.io/mount-as: env labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: SSL_CERT_FILE: /home/user/certs/tls.cer", "kind: ConfigMap apiVersion: v1 metadata: name: init-nuget annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /projects labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' data: nuget.config: | <?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <packageSources> <add key=\"nexus2\" value=\"https:// <nuget_artifact_repository_route> /repository/nuget-group/\"/> </packageSources> <packageSourceCredentials> <nexus2> <add key=\"Username\" value=\"admin\" /> <add key=\"Password\" value=\"passwd\" /> </nexus2> </packageSourceCredentials> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/user_guide/using-credentials-and-configurations-in-workspaces
Chapter 7. Preparing a PXE installation source
Chapter 7. Preparing a PXE installation source You must configure TFTP and DHCP on a PXE server to enable PXE boot and network installation. 7.1. Network install overview A network installation allows you to install Red Hat Enterprise Linux on a system that has access to an installation server. At a minimum, two systems are required for a network installation: Server A system running a DHCP server, an HTTP, HTTPS, FTP, or NFS server, and in the PXE boot case, a TFTP server. Although each server can run on a different physical system, the procedures in this section assume a single system is running all servers. Client The system to which you are installing Red Hat Enterprise Linux. Once installation starts, the client queries the DHCP server, receives the boot files from the HTTP or TFTP server, and downloads the installation image from the HTTP, HTTPS, FTP, or NFS server. Unlike other installation methods, the client does not require any physical boot media for the installation to start. To boot a client from the network, enable network boot in the firmware or in a quick boot menu on the client. On some hardware, the option to boot from a network might be disabled or not available. The workflow steps to prepare to install Red Hat Enterprise Linux from a network using HTTP or PXE are as follows: Procedure Export the installation ISO image or the installation tree to an NFS, HTTPS, HTTP, or FTP server. Configure the HTTP or TFTP server and DHCP server, and start the HTTP or TFTP service on the server. Boot the client and start the installation. You can choose between the following network boot protocols: HTTP Red Hat recommends using HTTP boot if your client UEFI supports it. HTTP boot is usually more reliable. PXE (TFTP) PXE boot is more widely supported by client systems, but sending the boot files over this protocol might be slow and result in timeout failures. Additional resources Red Hat Satellite product documentation 7.2. Configuring the DHCPv4 server for network boot Enable the DHCP version 4 (DHCPv4) service on your server, so that it can provide network boot functionality. Prerequisites You are preparing network installation over the IPv4 protocol. For IPv6, see Configuring the DHCPv6 server for network boot instead. Find the network addresses of the server. In the following examples, the server has a network card with this configuration: IPv4 address 192.168.124.2/24 IPv4 gateway 192.168.124.1 Procedure Install the DHCP server: Set up a DHCPv4 server. Enter the following configuration in the /etc/dhcp/dhcpd.conf file. Replace the addresses to match your network card. Start the DHCPv4 service: 7.3. Configuring the DHCPv6 server for network boot Enable the DHCP version 6 (DHCPv6) service on your server, so that it can provide network boot functionality. Prerequisites You are preparing network installation over the IPv6 protocol. For IPv4, see Configuring the DHCPv4 server for network boot instead. Find the network addresses of the server. In the following examples, the server has a network card with this configuration: IPv6 address fd33:eb1b:9b36::2/64 IPv6 gateway fd33:eb1b:9b36::1 Procedure Install the DHCP server: Set up a DHCPv6 server. Enter the following configuration in the /etc/dhcp/dhcpd6.conf file. Replace the addresses to match your network card. Start the DHCPv6 service: If DHCPv6 packets are dropped by the RP filter in the firewall, check its log.
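One way to look for these entries (a sketch; log locations can vary with your logging setup) is to search the kernel messages on the server:
journalctl -k | grep -i rpfilter_DROP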
If the log contains the rpfilter_DROP entry, disable the filter using the following configuration in the /etc/firewalld/firewalld.conf file: 7.4. Configuring a TFTP server for BIOS-based clients You must configure a TFTP server and DHCP server and start the TFTP service on the PXE server for BIOS-based AMD and Intel 64-bit systems. Procedure As root, install the following package. Allow incoming connections to the tftp service in the firewall: This command enables temporary access until the server reboot. optional: To enable permanent access, add the --permanent option to the command. Depending on the location of the installation ISO file, you might have to allow incoming connections for HTTP or other services. Access the pxelinux.0 file from the SYSLINUX package in the DVD ISO image file, where my_local_directory is the name of the directory that you create: Extract the package: Create a pxelinux/ directory in tftpboot/ and copy all the files from the directory into the pxelinux/ directory: Create the directory pxelinux.cfg/ in the pxelinux/ directory: Create a configuration file named default and add it to the pxelinux.cfg/ directory as shown in the following example: The installation program cannot boot without its runtime image. Use the inst.stage2 boot option to specify location of the image. Alternatively, you can use the inst.repo= option to specify the image as well as the installation source. The installation source location used with inst.repo must contain a valid .treeinfo file. When you select the RHEL9 installation DVD as the installation source, the .treeinfo file points to the BaseOS and the AppStream repositories. You can use a single inst.repo option to load both repositories. Create a subdirectory to store the boot image files in the /var/lib/tftpboot/ directory, and copy the boot image files to the directory. In this example, the directory is /var/lib/tftpboot/pxelinux/images/RHEL-9/ : Start and enable the tftp.socket service: The PXE boot server is now ready to serve PXE clients. You can start the client, which is the system to which you are installing Red Hat Enterprise Linux, select PXE Boot when prompted to specify a boot source, and start the network installation. 7.5. Configuring a TFTP server for UEFI-based clients You must configure a TFTP server and DHCP server and start the TFTP service on the PXE server for UEFI-based AMD64, Intel 64, and 64-bit ARM systems. Important Red Hat Enterprise Linux 9 UEFI PXE boot supports a lowercase file format for a MAC-based GRUB menu file. For example, the MAC address file format for GRUB is grub.cfg-01-aa-bb-cc-dd-ee-ff Procedure As root, install the following package. Allow incoming connections to the tftp service in the firewall: This command enables temporary access until the server reboot. Optional: To enable permanent access, add the --permanent option to the command. Depending on the location of the installation ISO file, you might have to allow incoming connections for HTTP or other services. Access the EFI boot image files from the DVD ISO image: Copy the EFI boot images from the DVD ISO image: Fix the permissions of the copied files: Replace the content of /var/lib/tftpboot/redhat/efi/boot/grub.cfg with the following example: The installation program cannot boot without its runtime image. Use the inst.stage2 boot option to specify location of the image. Alternatively, you can use the inst.repo= option to specify the image as well as the installation source. 
The installation source location used with inst.repo must contain a valid .treeinfo file. When you select the RHEL9 installation DVD as the installation source, the .treeinfo file points to the BaseOS and the AppStream repositories. You can use a single inst.repo option to load both repositories. Create a subdirectory to store the boot image files in the /var/lib/tftpboot/ directory, and copy the boot image files to the directory. In this example, the directory is /var/lib/tftpboot/images/RHEL-9/ : Start and enable the tftp.socket service: The PXE boot server is now ready to serve PXE clients. You can start the client, which is the system to which you are installing Red Hat Enterprise Linux, select PXE Boot when prompted to specify a boot source, and start the network installation. Additional resources Using the Shim Program 7.6. Configuring a network server for IBM Power systems You can configure a network boot server for IBM Power systems by using GRUB. Procedure As root, install the following packages: Allow incoming connections to the tftp service in the firewall: This command enables temporary access until the server reboot. Optional: To enable permanent access, add the --permanent option to the command. Depending on the location of the installation ISO file, you might have to allow incoming connections for HTTP or other services. Create a GRUB network boot directory inside the TFTP root: The command output informs you of the file name that needs to be configured in your DHCP configuration, described in this procedure. If the PXE server runs on an x86 machine, the grub2-ppc64le-modules package must be installed before creating a GRUB2 network boot directory inside the tftp root: Create a GRUB configuration file: /var/lib/tftpboot/boot/grub2/grub.cfg as shown in the following example: The installation program cannot boot without its runtime image. Use the inst.stage2 boot option to specify the location of the image. Alternatively, you can use the inst.repo= option to specify the image as well as the installation source. The installation source location used with inst.repo must contain a valid .treeinfo file. When you select the RHEL9 installation DVD as the installation source, the .treeinfo file points to the BaseOS and the AppStream repositories. You can use a single inst.repo option to load both repositories. Mount the DVD ISO image using the command: Create a directory and copy the initrd.img and vmlinuz files from the DVD ISO image into it, for example: Configure your DHCP server to use the boot images packaged with GRUB2 as shown in the following example. If you already have a DHCP server configured, then perform this step on the DHCP server. Adjust the sample parameters subnet , netmask , routers , fixed-address , and hardware ethernet to fit your network configuration. The filename parameter is the file name that was output by the grub2-mknetdir command earlier in this procedure. On the DHCP server, start and enable the dhcpd service. If you have configured a DHCP server on the localhost, then start and enable the dhcpd service on the localhost. Start and enable the tftp.socket service: The PXE boot server is now ready to serve PXE clients. You can start the client, which is the system to which you are installing Red Hat Enterprise Linux, select PXE Boot when prompted to specify a boot source, and start the network installation.
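The GRUB and PXELINUX examples in this chapter point inst.repo= at an HTTP server; a minimal sketch of serving the extracted DVD contents with Apache httpd (the paths follow the examples above and are illustrative) is:
dnf install httpd
mkdir -p /var/www/html/RHEL-9/x86_64/iso-contents-root/
mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro
cp -r /mount_point/. /var/www/html/RHEL-9/x86_64/iso-contents-root/
umount /mount_point
systemctl enable --now httpd
firewall-cmd --add-service=http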
[ "dnf install dhcp-server", "option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }", "systemctl enable --now dhcpd", "dnf install dhcp-server", "option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }", "systemctl enable --now dhcpd6", "IPv6_rpfilter=no", "dnf install tftp-server", "firewall-cmd --add-service=tftp", "mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro", "cp -pr /mount_point/AppStream/Packages/syslinux-tftpboot-version-architecture.rpm /my_local_directory", "umount /mount_point", "rpm2cpio syslinux-tftpboot-version-architecture.rpm | cpio -dimv", "mkdir /var/lib/tftpboot/pxelinux", "cp /my_local_directory/tftpboot/* /var/lib/tftpboot/pxelinux", "mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg", "default vesamenu.c32 prompt 1 timeout 600 display boot.msg label linux menu label ^Install system menu default kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ label vesa menu label Install system with ^basic video driver kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img ip=dhcp inst.xdriver=vesa nomodeset inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ label rescue menu label ^Rescue installed system kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img inst.rescue inst.repo=http:///192.168.124.2/RHEL-8/x86_64/iso-contents-root/ label local menu label Boot from ^local drive localboot 0xffff", "mkdir -p /var/lib/tftpboot/pxelinux/images/RHEL-9/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/images/RHEL-9/", "systemctl enable --now tftp.socket", "dnf install tftp-server", "firewall-cmd --add-service=tftp", "mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro", "mkdir /var/lib/tftpboot/redhat cp -r /mount_point/EFI /var/lib/tftpboot/redhat/ umount /mount_point", "chmod -R 755 /var/lib/tftpboot/redhat/", "set timeout=60 menuentry 'RHEL 9' { linux images/RHEL-9/vmlinuz ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ initrd images/RHEL-9/initrd.img }", "mkdir -p /var/lib/tftpboot/images/RHEL-9/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img}/var/lib/tftpboot/images/RHEL-9/", "systemctl enable --now tftp.socket", "dnf install 
tftp-server dhcp-server", "firewall-cmd --add-service=tftp", "grub2-mknetdir --net-directory=/var/lib/tftpboot Netboot directory for powerpc-ieee1275 created. Configure your DHCP server to point to /boot/grub2/powerpc-ieee1275/core.elf", "dnf install grub2-ppc64le-modules", "set default=0 set timeout=5 echo -e \"\\nWelcome to the Red Hat Enterprise Linux 9 installer!\\n\\n\" menuentry 'Red Hat Enterprise Linux 9' { linux grub2-ppc64/vmlinuz ro ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ initrd grub2-ppc64/initrd.img }", "mount -t iso9660 /path_to_image/name_of_iso/ /mount_point -o loop,ro", "cp /mount_point/ppc/ppc64/{initrd.img,vmlinuz} /var/lib/tftpboot/grub2-ppc64/", "subnet 192.168.0.1 netmask 255.255.255.0 { allow bootp; option routers 192.168.0.5; group { #BOOTP POWER clients filename \"boot/grub2/powerpc-ieee1275/core.elf\"; host client1 { hardware ethernet 01:23:45:67:89:ab; fixed-address 192.168.0.112; } } }", "systemctl enable --now dhcpd", "systemctl enable --now tftp.socket" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/preparing-for-a-network-install_rhel-installer
probe::workqueue.create
probe::workqueue.create Name probe::workqueue.create - Creating a new workqueue Synopsis workqueue.create Values wq_thread task_struct of the workqueue thread cpu cpu for which the worker thread is created
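As an illustration (not part of the reference entry), a one-line SystemTap script that reports each workqueue thread as it is created could look like this:
stap -e 'probe workqueue.create { printf("workqueue thread %p created for cpu %d\n", wq_thread, cpu) }'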
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-workqueue-create
Chapter 1. Executive summary
Chapter 1. Executive summary Many hardware vendors now offer both Ceph-optimized servers and rack-level solutions designed for distinct workload profiles. To simplify the hardware selection process and reduce risk for organizations, Red Hat has worked with multiple storage server vendors to test and evaluate specific cluster options for different cluster sizes and workload profiles. Red Hat's exacting methodology combines performance testing with proven guidance for a broad range of cluster capabilities and sizes. With appropriate storage servers and rack-level solutions, Red Hat Ceph Storage can provide storage pools serving a variety of workloads-from throughput-sensitive and cost and capacity-focused workloads to emerging IOPS-intensive workloads. Red Hat Ceph Storage significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments. Red Hat Ceph Storage offers mature interfaces for enterprise block and object storage, making it an optimal solution for active archive, rich media, and cloud infrastructure workloads characterized by tenant-agnostic OpenStack(R) environments [1] . Delivered as a unified, software-defined, scale-out storage platform, Red Hat Ceph Storage lets businesses focus on improving application innovation and availability by offering capabilities such as: Scaling to hundreds of petabytes [2] . No single point of failure in the cluster. Lower capital expenses (CapEx) by running on commodity server hardware. Lower operational expenses (OpEx) with self-managing and self-healing properties. Red Hat Ceph Storage can run on myriad industry-standard hardware configurations to satisfy diverse needs. To simplify and accelerate the cluster design process, Red Hat conducts extensive performance and suitability testing with participating hardware vendors. This testing allows evaluation of selected hardware under load and generates essential performance and sizing data for diverse workloads-ultimately simplifying Ceph storage cluster hardware selection. As discussed in this guide, multiple hardware vendors now provide server and rack-level solutions optimized for Red Hat Ceph Storage deployments with IOPS-, throughput-, and cost and capacity-optimized solutions as available options. Software-defined storage presents many advantages to organizations seeking scale-out solutions to meet demanding applications and escalating storage needs. With a proven methodology and extensive testing performed with multiple vendors, Red Hat simplifies the process of selecting hardware to meet the demands of any environment. Importantly, the guidelines and example systems listed in this document are not a substitute for quantifying the impact of production workloads on sample systems. [1] Ceph is and has been the leading storage for OpenStack according to several semi-annual OpenStack user surveys. [2] See Yahoo Cloud Object Store - Object Storage at Exabyte Scale for details.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/hardware_guide/executive-summary_hw
Chapter 16. Installing and configuring RH-SSO
Chapter 16. Installing and configuring RH-SSO A realm is a security policy domain defined for a web or application server. Security realms are used to restrict access for different application resources. You should create a new realm whether your RH-SSO instance is private or shared with other products. You can keep the master realm as a place for super administrators to create and manage the realms in your system. If you are integrating with an RH-SSO instance that is shared with other product installations to achieve single sign-on with those applications, all of those applications must use the same realm. To create an RH-SSO realm, download, install, and configure RH-SSO 7.5. Note If Business Central and KIE Server are installed on different servers, complete this procedure on both servers. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the product and version from the drop-down options: Product: Red Hat Single Sign-On Version: 7.5 Download Red Hat Single Sign-On 7.5.0 Server ( rh-sso-7.5.0.zip ) and the latest server patch. To install and configure a basic RH-SSO standalone server, follow the instructions in the Red Hat Single Sign On Getting Started Guide . For advanced settings for production environments, see the Red Hat Single Sign On Server Administration Guide . Note If you want to run both RH-SSO and Red Hat Process Automation Manager servers on the same system, ensure that you avoid port conflicts by taking one of the following actions: Update the RHSSO_HOME /standalone/configuration/standalone-full.xml file and set the port offset to 100. For example: Use an environment variable to set the port offset when running the server:
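With an offset of 100, the default RH-SSO HTTP port 8080 becomes 8180 and the management port 9990 becomes 10090, so a quick local check of the running server (illustrative) is:
curl -s http://localhost:8180/auth/realms/master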
[ "<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:100}\">", "bin/standalone.sh -Djboss.socket.binding.port-offset=100" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/sso-realm-proc
Chapter 1. Supported platforms
Chapter 1. Supported platforms This section describes the availability and the supported installation methods of CodeReady Workspaces 2.15 on OpenShift Container Platform 4.8 to 4.10 and 3.11, and OpenShift Dedicated. Table 1.1. Supported deployment environments for CodeReady Workspaces 2.15 on OpenShift Container Platform and OpenShift Dedicated Platform Architecture Deployment method OpenShift Container Platform 3.11 AMD64 and Intel 64 (x86_64) crwctl OpenShift Container Platform 4.8 to 4.10 AMD64 and Intel 64 (x86_64) OperatorHub, crwctl OpenShift Container Platform 4.8 to 4.10 IBM Z (s390x) OperatorHub, crwctl OpenShift Container Platform 4.8 to 4.10 IBM Power (ppc64le) OperatorHub, crwctl OpenShift Dedicated 4.10 AMD64 and Intel 64 (x86_64) Add-on service Red Hat OpenShift Service on AWS (ROSA) AMD64 and Intel 64 (x86_64) Add-on service
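For the crwctl deployment method listed in the table, a typical invocation (a sketch; see the installation guide for the complete set of flags) is:
crwctl server:deploy --platform openshift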
null
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/installation_guide/supported-platforms_crw
6.6. Listing Fence Devices and Fence Device Options
6.6. Listing Fence Devices and Fence Device Options You can use the ccs command to print a list of available fence devices and to print a list of options for each available fence type. You can also use the ccs command to print a list of fence devices currently configured for your cluster. To print a list of fence devices currently available for your cluster, execute the following command: For example, the following command lists the fence devices available on the cluster node node1 , showing sample output. To print a list of the options you can specify for a particular fence type, execute the following command: For example, the following command lists the fence options for the fence_wti fence agent. To print a list of fence devices currently configured for your cluster, execute the following command:
[ "ccs -h host --lsfenceopts", "ccs -h node1 --lsfenceopts fence_rps10 - RPS10 Serial Switch fence_vixel - No description available fence_egenera - No description available fence_xcat - No description available fence_na - Node Assassin fence_apc - Fence agent for APC over telnet/ssh fence_apc_snmp - Fence agent for APC over SNMP fence_bladecenter - Fence agent for IBM BladeCenter fence_bladecenter_snmp - Fence agent for IBM BladeCenter over SNMP fence_cisco_mds - Fence agent for Cisco MDS fence_cisco_ucs - Fence agent for Cisco UCS fence_drac5 - Fence agent for Dell DRAC CMC/5 fence_eps - Fence agent for ePowerSwitch fence_ibmblade - Fence agent for IBM BladeCenter over SNMP fence_ifmib - Fence agent for IF MIB fence_ilo - Fence agent for HP iLO fence_ilo_mp - Fence agent for HP iLO MP fence_intelmodular - Fence agent for Intel Modular fence_ipmilan - Fence agent for IPMI over LAN fence_kdump - Fence agent for use with kdump fence_rhevm - Fence agent for RHEV-M REST API fence_rsa - Fence agent for IBM RSA fence_sanbox2 - Fence agent for QLogic SANBox2 FC switches fence_scsi - fence agent for SCSI-3 persistent reservations fence_virsh - Fence agent for virsh fence_virt - Fence agent for virtual machines fence_vmware - Fence agent for VMware fence_vmware_soap - Fence agent for VMware over SOAP API fence_wti - Fence agent for WTI fence_xvm - Fence agent for virtual machines", "ccs -h host --lsfenceopts fence_type", "ccs -h node1 --lsfenceopts fence_wti fence_wti - Fence agent for WTI Required Options: Optional Options: option: No description available action: Fencing Action ipaddr: IP Address or Hostname login: Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection identity_file: Identity file for ssh port: Physical plug number or name of virtual machine inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on", "ccs -h host --lsfencedev" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-list-fence-devices-ccs-ca
Chapter 3. Deprecated functionalities
Chapter 3. Deprecated functionalities This section lists deprecated functionalities in Red Hat Developer Hub 1.4. 3.1. ./dynamic-plugins/dist/janus-idp-backstage-plugin-aap-backend-dynamic plugin is deprecated The ./dynamic-plugins/dist/janus-idp-backstage-plugin-aap-backend-dynamic plugin has been deprecated and will be removed in a future release. You can use Ansible plug-ins for Red Hat Developer Hub instead. Additional resources RHIDP-3545 3.2. Audit log rotation is deprecated With this update, you can evaluate your platform's log forwarding solutions to align with your security and compliance needs. Most of these solutions offer configurable options to minimize the loss of logs in the event of an outage. Additional resources RHIDP-4913 3.3. Red Hat Single Sign-On 7.6 is deprecated as an authentication provider Red Hat Single Sign-On (RHSSO) 7.6 is deprecated as an authentication provider. You can continue to use RHSSO until the end of maintenance support. For details, see RHSSO lifecycle dates . As an alternative, migrate to Red Hat Build of Keycloak v24 . Additional resources RHIDP-5218
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/release_notes/deprecated-functionalities
2.10. numad
2.10. numad numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system in order to dynamically improve NUMA resource allocation and management (and therefore system performance). Depending on system workload, numad can provide up to 50 percent improvements in performance benchmarks. It also provides a pre-placement advice service that can be queried by various job management systems to provide assistance with the initial binding of CPU and memory resources for their processes. numad monitors available system resources on a per-node basis by periodically accessing information in the /proc file system. It tries to maintain a specified resource usage level, and rebalances resource allocation when necessary by moving processes between NUMA nodes. numad attempts to achieve optimal NUMA performance by localizing and isolating significant processes on a subset of the system's NUMA nodes. numad primarily benefits systems with long-running processes that consume significant amounts of resources, and are contained in a subset of the total system resources. It may also benefit applications that consume multiple NUMA nodes' worth of resources; however, the benefits provided by numad decrease as the consumed percentage of system resources increases. numad is unlikely to improve performance when processes run for only a few minutes, or do not consume many resources. Systems with continuous, unpredictable memory access patterns, such as large in-memory databases, are also unlikely to benefit from using numad. For further information about using numad, see Section 6.3.5, "Automatic NUMA Affinity Management with numad" or Section A.13, "numad" , or refer to the man page:
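For example (an illustration; verify the exact option syntax in the man page), you can run numad as a service and query it for pre-placement advice for a job that needs 4 CPUs and 8192 MB of memory:
systemctl enable --now numad
numad -w 4:8192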
[ "man numad" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-numad
Images
Images OpenShift Container Platform 4.12 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/images/index
Chapter 2. Accessing the Multicloud Object Gateway with your applications
Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint . Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Accessing the Multicloud Object Gateway from the terminal Accessing the Multicloud Object Gateway from the MCG command-line interface For example: Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG buckets using the virtual-hosted style. 2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You have the relevant endpoint, access key, and secret access key in order to connect to your applications. For example: If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: 2.3. Support of Multicloud Object Gateway data bucket APIs The following table lists the Multicloud Object Gateway (MCG) data bucket APIs and their support levels. 
Data buckets Support List buckets Supported Delete bucket Supported Replication configuration is part of MCG bucket class configuration Create bucket Supported A different set of canned ACLs Post bucket Not supported Put bucket Partially supported Replication configuration is part of MCG bucket class configuration Bucket lifecycle Partially supported Object expiration only Policy (Buckets, Objects) Partially supported Bucket policies are supported Bucket Website Supported Bucket ACLs (Get, Put) Supported A different set of canned ACLs Bucket Location Partially supported Returns a default value only Bucket Notification Not supported Bucket Object Versions Supported Get Bucket Info (HEAD) Supported Bucket Request Payment Partially supported Returns the bucket owner Put Object Supported Delete Object Supported Get Object Supported Object ACLs (Get, Put) Supported Get Object Info (HEAD) Supported POST Object Supported Copy Object Supported Multipart Uploads Supported Object Tagging Supported Storage Class Not supported Note No support for cors, metrics, inventory, analytics, logging, notifications, accelerate, replication, request payment, locks verbs
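As a hedged, illustrative sketch (not taken from the product documentation), the endpoint and credentials retrieved in the sections above can be used to exercise a few of the supported data bucket operations with the AWS CLI. The endpoint URL and the mcg-test-bucket name reuse example values from this chapter, and example.txt is a hypothetical object name.
# Read the MCG S3 credentials from the noobaa-admin secret, as shown in the
# terminal output above, and export them for the AWS CLI.
AWS_ACCESS_KEY_ID=$(oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
AWS_SECRET_ACCESS_KEY=$(oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
# Example external S3 endpoint; substitute the value reported by
# `oc describe noobaa` or `noobaa status` in your cluster.
S3_ENDPOINT="https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com"
aws --endpoint "$S3_ENDPOINT" --no-verify-ssl s3 ls                                       # List buckets
aws --endpoint "$S3_ENDPOINT" --no-verify-ssl s3 cp ./example.txt s3://mcg-test-bucket/   # Put object
aws --endpoint "$S3_ENDPOINT" --no-verify-ssl s3 cp s3://mcg-test-bucket/example.txt ./   # Get object
Bucket creation and deletion, also listed as supported above, can be tried the same way with aws s3 mb and aws s3 rb against the same endpoint.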
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc describe noobaa -n openshift-storage", "Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa status -n openshift-storage", "INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" 
INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.", "AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls" ]
https://docs.redhat.com/en/documentation//red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/accessing-the-multicloud-object-gateway-with-your-applications_rhodf
16.5.2. Configuring a DHCPv6 Client
16.5.2. Configuring a DHCPv6 Client The default configuration of the DHCPv6 client works fine in most cases. However, to configure a DHCPv6 client manually, create and modify the /etc/dhcp/dhclient.conf file. See the /usr/share/doc/dhclient-4.1.1/dhclient6.conf.sample file for an example client configuration. For advanced configuration of DHCPv6 client options such as protocol timing, lease requirements and requests, dynamic DNS support, aliases, as well as a wide variety of values to override, prepend, or append to client-side configurations, see the dhclient.conf(5) man page and the STANDARD DHCPV6 OPTIONS section in the dhcpd-options(5) man page. Important In Red Hat Enterprise Linux 6, a DHCPv6 client is correctly handled only by NetworkManager and should not generally be run separately. This is because DHCPv6, unlike DHCPv4, is not a standalone network configuration protocol but is always meant to be used together with router discovery.
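As a hedged sketch (the path matches the dhclient-4.1.1 sample file referenced above; adjust it to the version installed on your system), the shipped sample can be used as a starting point for a manual client configuration:
cp /usr/share/doc/dhclient-4.1.1/dhclient6.conf.sample /etc/dhcp/dhclient.conf   # seed the configuration from the sample
vi /etc/dhcp/dhclient.conf                                                       # adjust timing, requested options, and so on per dhclient.conf(5)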
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-dhcpv6-client
7.103. libdrm
7.103. libdrm 7.103.1. RHBA-2015:1301 - libdrm, mesa, xorg-x11-drv-ati, and xorg-x11-drv-intel update Updated libdrm, mesa, xorg-x11-drv-ati, and xorg-x11-drv-intel packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libdrm packages comprise a runtime library for the Direct Rendering Manager. Mesa provides a 3D graphics API that is compatible with Open Graphics Library (OpenGL) and hardware-accelerated drivers for many popular graphics chips. The xorg-x11-drv-ati packages include a driver for ATI graphics cards for the X.Org implementation of the X Window System. The xorg-x11-drv-intel packages contain an Intel integrated graphics video driver for the X.Org implementation of the X Window System. Note The libdrm packages have been upgraded to upstream version 2.4.59, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1186821 ) * The mesa packages have been upgraded to upstream version 10.4.3, which provides a number of bug fixes and enhancements over the previous version. Among other changes, this version includes support for new Intel 3D graphic chip sets. (BZ# 1032663 ) * Support for new Intel 3D graphic chip sets has been backported to the xorg-x11-drv-intel packages. * The xorg-x11-drv-ati packages have been upgraded to upstream version 7.5.99, which contains a number of bug fixes and enhancements over the previous version. Among other changes, this version includes support for new AMD 3D graphic chip sets. (BZ# 1176666 ) Bug Fixes BZ# 1186821 The libdrm packages have been upgraded to upstream version 2.4.59, which provides a number of bug fixes and enhancements over the previous version. BZ# 1032663 The mesa packages have been upgraded to upstream version 10.4.3, which provides a number of bug fixes and enhancements over the previous version. Among other changes, this version includes support for new Intel 3D graphic chip sets. BZ# 1176666 The xorg-x11-drv-ati packages have been upgraded to upstream version 7.5.99, which contains a number of bug fixes and enhancements over the previous version. Among other changes, this version includes support for new AMD 3D graphic chip sets. In addition, support for new Intel 3D graphic chip sets has been backported to the xorg-x11-drv-intel packages. BZ# 1084104 Previously, the radeon driver did not work correctly with the Virtual Network Computing (VNC) module if hardware acceleration was enabled. Consequently, a VNC client connected to a computer set up this way only displayed a blank screen. With this update, this problem has been resolved, and it is now possible to use VNC with the aforementioned setup. Users of libdrm, mesa, xorg-x11-drv-ati, and xorg-x11-drv-intel are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
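As a hedged illustration (a generic yum invocation, not a command taken from the advisory), the updated packages can be pulled in on a registered Red Hat Enterprise Linux 6 system with:
# Update the packages covered by this advisory; the mesa* glob covers the
# mesa binary subpackages (mesa-libGL, mesa-dri-drivers, and so on).
yum update libdrm 'mesa*' xorg-x11-drv-ati xorg-x11-drv-intel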
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-libdrm
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_in_external_mode/providing-feedback-on-red-hat-documentation_rhodf
Chapter 2. Installing a cluster on IBM Power
Chapter 2. Installing a cluster on IBM Power In OpenShift Container Platform version 4.13, you can install a cluster on IBM Power infrastructure that you provision. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. 
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Power requirements You can install OpenShift Container Platform version 4.13 on the following IBM hardware: IBM Power9 or Power10 processor-based systems Note Support for RHCOS functionality for all IBM Power8 models, IBM Power AC922, IBM Power IC922, and IBM Power LC922 is deprecated in OpenShift Container Platform 4.13. Red Hat recommends that you use later hardware models. Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power9 or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. 
Recommended IBM Power system requirements Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power9 or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. 
Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.7. 
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. 
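As a hedged convenience (not part of the official procedure), the record names in the table above can be expanded for a given cluster with a small shell loop; the cluster name, base domain, and node counts below are the example values used in the zone files later in this section.
# Print the fully qualified record names required by Table 2.6 for the
# example ocp4.example.com cluster (three control plane and two compute nodes).
CLUSTER_NAME=ocp4
BASE_DOMAIN=example.com
for name in api api-int "*.apps" bootstrap \
            control-plane0 control-plane1 control-plane2 \
            compute0 compute1; do
  echo "${name}.${CLUSTER_NAME}.${BASE_DOMAIN}."
done
Each printed name needs the forward record described in the table, and all of them except the application wildcard also need a matching PTR record.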
Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 
7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. 
Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. 
Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. 
All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 2.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 2.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 2.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 2.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway.
You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 2.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 2.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 2.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 2.9.2. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 
2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. 
This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.4. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. 
Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.12. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.13. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.14. 
openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.15. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. 
v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 2.16. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.17. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 2.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. 
proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
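A quick way to confirm the setting without reopening the editor is to search the manifest directly; this optional check is shown as a sketch and uses the same placeholder installation directory as the rest of this procedure:
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
The output should show mastersSchedulable: false for a cluster with dedicated compute nodes.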
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 2.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
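If you prefer a single check that covers all three files, a small shell loop over the node types reports just the HTTP status code for each Ignition config. This sketch assumes that all three files were uploaded to the same <HTTP_server> location:
for node_type in bootstrap master worker; do
  curl -k -s -o /dev/null -w "%{http_code} ${node_type}.ign\n" http://<HTTP_server>/${node_type}.ign
done
A 200 response for each file indicates that the cluster nodes will be able to fetch their Ignition configs during first boot.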
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. 
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.12.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. 
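The static ip= values in the examples that follow use dracut's positional syntax; as a reference sketch, the field layout used here is:
ip=<node_ip>:<peer>:<gateway>:<netmask>:<hostname>:<interface>:<autoconf>
The <peer> field is left empty in these examples, which is why the value contains a double colon, and <autoconf> is set to none because the addressing is static. See the dracut.cmdline manual page for the authoritative description of every field.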
The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. 
To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . 
For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none 2.12.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
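If a dedicated web server was not available for the upload step above, a throwaway HTTP server is often sufficient for lab or test installations. The following sketch is an assumption rather than part of the documented procedure; it copies only the Ignition files into an empty directory and serves them on port 8080 with Python 3, so that the auth subdirectory of the installation directory is never exposed:
mkdir /tmp/ignition
cp <installation_directory>/*.ign /tmp/ignition/
cd /tmp/ignition && python3 -m http.server 8080
If you use this approach, include the port in the URLs that you reference from your PXE configuration, for example http://<HTTP_server>:8080/bootstrap.ign .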
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 
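The example menu entry itself is not reproduced in this extract; a representative PXELINUX-style entry that is consistent with the callouts above might look like the following sketch, in which <HTTP_server>, <version>, <architecture>, and the coreos.inst.install_dev target disk are placeholders that you must adapt to your environment:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign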
You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform 4.9 or later, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides the added benefit of stronger resilience to hardware failure, which helps achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file.
For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. 
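If the wait-for command appears to stall, you can follow the bootstrap progress directly on the bootstrap machine over SSH. This is a troubleshooting sketch rather than part of the documented procedure; the hostname is an example from the DNS records shown earlier, and the journald units named here are the ones typically present on the bootstrap node:
ssh core@bootstrap.ocp4.example.com "journalctl -b -f -u release-image.service -u bootkube.service"
Watching this log can help identify where the bootstrap process is blocked.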
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.16.1.1. Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
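Optionally, before editing the registry configuration, you can confirm that the persistent storage you provisioned is visible to the cluster. This quick check is not part of the documented procedure:
oc get storageclass
oc get pv
The output should show the storage class or persistent volumes that you intend the image registry to use.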
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 
worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m 
operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_power/installing-ibm-power
Appendix A. Technical Users Provided and Required by Satellite
Appendix A. Technical Users Provided and Required by Satellite During the installation of Satellite, system accounts are created. They are used to manage files and process ownership of the components integrated into Satellite. Some of these accounts have fixed UIDs and GIDs, while others take the available UID and GID on the system instead. To control the UIDs and GIDs assigned to accounts, you can define accounts before installing Satellite. Because some of the accounts have hard-coded UIDs and GIDs, it is not possible to do this with all accounts created during Satellite installation. The following table lists all the accounts created by Satellite during installation. You can predefine accounts that have Yes in the Flexible UID and GID column with custom UID and GID before installing Satellite. Do not change the home and shell directories of system accounts because they are requirements for Satellite to work correctly. Because of potential conflicts with local users that Satellite creates, you cannot use external identity providers for the system users of the Satellite base operating system. Table A.1. Technical Users Provided and Required by Satellite
User name | UID | Group name | GID | Flexible UID and GID | Home | Shell
foreman | N/A | foreman | N/A | yes | /usr/share/foreman | /sbin/nologin
foreman-proxy | N/A | foreman-proxy | N/A | yes | /usr/share/foreman-proxy | /sbin/nologin
puppet | N/A | puppet | N/A | yes | /opt/puppetlabs/server/data/puppetserver | /sbin/nologin
qdrouterd | N/A | qdrouterd | N/A | yes | N/A | /sbin/nologin
qpidd | N/A | qpidd | N/A | yes | /var/lib/qpidd | /sbin/nologin
unbound | N/A | unbound | N/A | yes | /etc/unbound | /sbin/nologin
postgres | 26 | postgres | 26 | no | /var/lib/pgsql | /bin/bash
apache | 48 | apache | 48 | no | /usr/share/httpd | /sbin/nologin
tomcat | 53 | tomcat | 53 | no | /usr/share/tomcat | /bin/nologin
saslauth | N/A | saslauth | 76 | no | N/A | /sbin/nologin
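For the accounts marked as flexible in the table, a minimal sketch of predefining one of them before running the Satellite installer might look like the following; the UID and GID value of 1001 is an arbitrary example, while the home directory and shell are taken from the table above:
# Create the group and user with a chosen UID/GID before installing Satellite
groupadd --gid 1001 foreman
useradd --uid 1001 --gid foreman --home-dir /usr/share/foreman --shell /sbin/nologin foreman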
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/chap-documentation-architecture_guide-required_technical_users
Postinstallation configuration
Postinstallation configuration OpenShift Container Platform 4.18 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/postinstallation_configuration/index
Chapter 17. Integrating with email
Chapter 17. Integrating with email With Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can configure your existing email provider to send notifications about policy violations. If you are using Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), you can use your existing email provider or the built-in email notifier to send email notifications. You can use the Default recipient field to forward alerts from RHACS and the RHACS Cloud Service to an email address. Otherwise, you can use annotations to define an audience and notify them about policy violations associated with a specific deployment or namespace. 17.1. Integrating with email on RHACS You can use email as a notification method by forwarding alerts from RHACS. 17.1.1. Configuring the email plugin The RHACS notifier can send email to a recipient specified in the integration, or it can use annotations to determine the recipient. Important If you are using RHACS Cloud Service, it blocks port 25 by default. Configure your mail server to use port 587 or 465 to send email notifications. Procedure Go to Platform Configuration Integrations . Under the Notifier Integrations section, select Email . Select New Integration . In the Integration name field, enter a name for your email integration. In the Email server field, enter the address of your email server. The email server address includes fully qualified domain name (FQDN) and the port number; for example, smtp.example.com:465 . Optional: If you are using unauthenticated SMTP, select Enable unauthenticated SMTP . This is insecure and not recommended, but might be required for some integrations. For example, you might need to enable this option if you use an internal server for notifications that does not require authentication. Note You cannot change an existing email integration that uses authentication to enable unauthenticated SMTP. You must delete the existing integration and create a new one with Enable unauthenticated SMTP selected. Enter the user name and password of a service account that is used for authentication. Optional: Enter the name that you want to appear in the FROM header of email notifications in the From field; for example, Security Alerts . Specify the email address that you want to appear in the SENDER header of email notifications in the Sender field. Specify the email address that will receive the notifications in the Default recipient field. Optional: Enter an annotation key in Annotation key for recipient . You can use annotations to dynamically determine an email recipient. To do this: Add an annotation similar to the following example in your namespace or deployment YAML file, where email is the Annotation key that you specify in your email integration. You can create an annotation for the deployment or the namespace. Use the annotation key email in the Annotation key for recipient field. If you configured the deployment or namespace with an annotation, the RHACS sends the alert to the email specified in the annotation. Otherwise, it sends the alert to the default recipient. Note The following rules govern how RHACS determines the recipient of an email notification: If the deployment has an annotation key, the annotation's value overrides the default value. If the namespace has an annotation key, the namespace's value overrides the default value. If a deployment has an annotation key and a defined audience, RHACS sends an email to the audience specified in the key. 
If a deployment does not have an annotation key, RHACS checks the namespace for an annotation key and sends an email to the specified audience. If no annotation keys exist, RHACS sends an email to the default recipient. Optional: Select Disable TLS certificate validation (insecure) to send email without TLS. You should not disable TLS unless you are using StartTLS. Note Use TLS for email notifications. Without TLS, all email is sent unencrypted. Optional: To use StartTLS, select either Login or Plain from the Use STARTTLS (requires TLS to be disabled) drop-down menu. Important With StartTLS, credentials are passed in plain text to the email server before the session encryption is established. StartTLS with the Login parameter sends authentication credentials in a base64 encoded string. StartTLS with the Plain parameter sends authentication credentials to your mail relay in plain text. Additional resources Configuring delivery destinations and scheduling 17.1.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the Email notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. 17.2. Integrating with email on RHACS Cloud Service You can use your existing email provider or the built-in email notifier in RHACS Cloud Service to send email alerts about policy violations. To use your own email provider, you must configure the email provider as described in the section Configuring the email plugin . To use the built-in email notifier, you must configure the RHACS Cloud Service email plugin. 17.2.1. Configuring the RHACS Cloud Service email plugin The RHACS Cloud Service notifier sends an email to a recipient. You can specify the recipient in the integration, or RHACS Cloud Service can use annotation keys to find the recipient. Important You can only send 250 emails per 24-hour rolling period. If you exceed this limit, RHACS Cloud Service sends emails only after the 24-hour period ends. Because of rate limits, Red Hat recommends using email notifications only for critical alerts or vulnerability reports. Procedure Go to Platform Configuration Integrations . Under the Notifier Integrations section, select RHACS Cloud Service Email . Select New Integration . In the Integration name field, enter a name for your email integration. Specify the email address to which you want to send the email notifications in the Default recipient field. Optional: Enter an annotation key in Annotation key for recipient . You can use annotations to dynamically determine an email recipient. 
To do this: Add an annotation similar to the following example in your namespace or deployment YAML file, where email is the Annotation key that you specify in your email integration. You can create an annotation for the deployment or the namespace. Use the annotation key email in the Annotation key for recipient field. If you configured the deployment or namespace with an annotation, the RHACS Cloud Service sends the alert to the email specified in the annotation. Otherwise, it sends the alert to the default recipient. Note The following rules govern how RHACS Cloud Service determines the recipient of an email notification: If the deployment has an annotation key, the annotation's value overrides the default value. If the namespace has an annotation key, the namespace's value overrides the default value. If a deployment has an annotation key and a defined audience, RHACS Cloud Service sends an email to the audience specified in the key. If a deployment does not have an annotation key, RHACS Cloud Service checks the namespace for an annotation key and sends an email to the specified audience. If no annotation keys exist, RHACS Cloud Service sends an email to the default recipient. Additional resources Configuring delivery destinations and scheduling 17.2.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the RHACS Cloud Service Email notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment.
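Both the self-managed and Cloud Service notifiers read the same annotation, so you can also apply it from the command line instead of editing the YAML file directly. A sketch, where the namespace name, deployment name, and address are placeholders and the annotation key must match the one configured in the integration:
oc annotate namespace <namespace> email=<email_address>
oc -n <namespace> annotate deployment <deployment> email=<email_address>
On a non-OpenShift cluster, kubectl annotate behaves the same way.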
[ "annotations: email: <email_address>", "annotations: email: <email_address>" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-using-email
probe::stap.cache_get
probe::stap.cache_get Name probe::stap.cache_get - Found item in stap cache Synopsis stap.cache_get Values module_path the path of the .ko kernel module file source_path the path of the .c source file Description Fires just before the return of get_from_cache, when the cache grab is successful.
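A minimal way to watch these cache hits on a running system is a one-line script such as the following; it assumes only the probe point and the two values documented above:
stap -e 'probe stap.cache_get { printf("cache hit: %s (%s)\n", module_path, source_path) }'
Each line printed corresponds to a compiled module that was served from the cache instead of being rebuilt.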
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stap-cache-get
B.34. java-1.6.0-sun
B.34. java-1.6.0-sun B.34.1. RHSA-2011:0282 - Critical: java-1.6.0-sun security update Updated java-1.6.0-sun packages that fix several security issues are now available for Red Hat Enterprise Linux 4 Extras, and Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. The Sun 1.6.0 Java release includes the Sun Java 6 Runtime Environment and the Sun Java 6 Software Development Kit. CVE-2010-4422 , CVE-2010-4447 , CVE-2010-4448 , CVE-2010-4450 , CVE-2010-4451 , CVE-2010-4452 , CVE-2010-4454 , CVE-2010-4462 , CVE-2010-4463 , CVE-2010-4465 , CVE-2010-4466 , CVE-2010-4467 , CVE-2010-4468 , CVE-2010-4469 , CVE-2010-4470 , CVE-2010-4471 , CVE-2010-4472 , CVE-2010-4473 , CVE-2010-4475 , CVE-2010-4476 This update fixes several vulnerabilities in the Sun Java 6 Runtime Environment and the Sun Java 6 Software Development Kit. Further information about these flaws can be found on the Oracle Java SE and Java for Business Critical Patch Update Advisory page. All users of java-1.6.0-sun are advised to upgrade to these updated packages, which resolve these issues. All running instances of Sun Java must be restarted for the update to take effect.
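On a registered Red Hat Enterprise Linux 5 or 6 system with the Supplementary channel enabled, applying the erratum typically amounts to updating the package and then restarting any running Java instances; this is a general sketch rather than part of the advisory text:
yum update java-1.6.0-sun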
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/java-1_6_0-sun
function::sa_handler_str
function::sa_handler_str Name function::sa_handler_str - Returns the string representation of an sa_handler Synopsis Arguments handler the sa_handler to convert to string. Description Returns the string representation of an sa_handler. If it is not SIG_DFL, SIG_IGN or SIG_ERR, it will return the address of the handler.
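As a quick illustration of the mapping, the function can be called from a trivial probe. This sketch assumes only the tapset function itself and that the value 0 corresponds to SIG_DFL:
stap -e 'probe begin { println(sa_handler_str(0)) exit() }'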
[ "sa_handler_str(handler:)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sa-handler-str
Automatically installing RHEL
Automatically installing RHEL Red Hat Enterprise Linux 8 Deploying RHEL on one or more systems from a predefined configuration Red Hat Customer Content Services
[ "dmesg|tail", "su -", "dmesg|tail [288954.686557] usb 2-1.8: New USB device strings: Mfr=0, Product=1, SerialNumber=2 [288954.686559] usb 2-1.8: Product: USB Storage [288954.686562] usb 2-1.8: SerialNumber: 000000009225 [288954.712590] usb-storage 2-1.8:1.0: USB Mass Storage device detected [288954.712687] scsi host6: usb-storage 2-1.8:1.0 [288954.712809] usbcore: registered new interface driver usb-storage [288954.716682] usbcore: registered new interface driver uas [288955.717140] scsi 6:0:0:0: Direct-Access Generic STORAGE DEVICE 9228 PQ: 0 ANSI: 0 [288955.717745] sd 6:0:0:0: Attached scsi generic sg4 type 0 [288961.876382] sd 6:0:0:0: sdd Attached SCSI removable disk", "dd if=/image_directory/image.iso of=/dev/device", "dd if=/home/testuser/Downloads/rhel-8-x86_64-boot.iso of=/dev/sdd", "diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.3 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_CoreStorage 400.0 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 4: Apple_CoreStorage 98.8 GB disk0s4 5: Apple_Boot Recovery HD 650.0 MB disk0s5 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS YosemiteHD *399.6 GB disk1 Logical Volume on disk0s1 8A142795-8036-48DF-9FC5-84506DFBB7B2 Unlocked Encrypted /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *8.1 GB disk2 1: Windows_NTFS SanDisk USB 8.1 GB disk2s1", "diskutil unmountDisk /dev/disknumber Unmount of all volumes on disknumber was successful", "sudo dd if= /Users/user_name/Downloads/rhel-8-x86_64-boot.iso of= /dev/rdisk2 bs= 512K status= progress", "yum install nfs-utils", "/ exported_directory / clients", "/rhel8-install *", "systemctl start nfs-server.service", "systemctl reload nfs-server.service", "mkdir /mnt/rhel8-install/", "mount -o loop,ro -t iso9660 /image_directory/image.iso /mnt/rhel8-install/", "cp -r /mnt/rhel8-install/ /var/www/html/", "systemctl start httpd.service", "systemctl enable firewalld", "systemctl start firewalld", "firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent", "firewall-cmd --reload", "mkdir /mnt/rhel8-install", "mount -o loop,ro -t iso9660 /image-directory/image.iso /mnt/rhel8-install", "mkdir /var/ftp/rhel8-install cp -r /mnt/rhel8-install/ /var/ftp/", "restorecon -r /var/ftp/rhel8-install find /var/ftp/rhel8-install -type f -exec chmod 444 {} \\; find /var/ftp/rhel8-install -type d -exec chmod 755 {} \\;", "systemctl start vsftpd.service", "systemctl restart vsftpd.service", "systemctl enable vsftpd", "install dhcp-server", "option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }", "systemctl enable --now dhcpd", "install dhcp-server", "option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option 
dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }", "systemctl enable --now dhcpd6", "IPv6_rpfilter=no", "yum install httpd", "mkdir -p /var/www/html/redhat/", "mkdir -p /var/www/html/redhat/iso/", "mount -o loop,ro -t iso9660 path-to-RHEL-DVD.iso /var/www/html/redhat/iso", "cp -r /var/www/html/redhat/iso/images /var/www/html/redhat/ cp -r /var/www/html/redhat/iso/EFI /var/www/html/redhat/", "chmod 644 /var/www/html/redhat/EFI/BOOT/grub.cfg", "set default=\"1\" function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus insmod all_video } load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set timeout=60 # END /etc/grub.d/00_header # search --no-floppy --set=root -l ' RHEL-9-3-0-BaseOS-x86_64 ' # BEGIN /etc/grub.d/10_linux # menuentry 'Install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Test this media & install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } submenu 'Troubleshooting -->' { menuentry 'Install Red Hat Enterprise Linux 9.3 in text mode' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.text quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.rescue quiet initrdefi ../../images/pxeboot/initrd.img } }", "chmod 755 /var/www/html/redhat/EFI/BOOT/BOOTX64.EFI", "firewall-cmd --zone public --add-port={80/tcp,67/udp,68/udp,546/udp,547/udp}", "firewall-cmd --reload", "systemctl enable --now httpd", "chmod -cR u=rwX,g=rX,o=rX /var/www/html", "restorecon -FvvR /var/www/html", "install dhcp-server", "option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }", "systemctl enable --now dhcpd", "install dhcp-server", "option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option 
dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }", "systemctl enable --now dhcpd6", "IPv6_rpfilter=no", "yum install tftp-server", "firewall-cmd --add-service=tftp", "mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro", "cp -pr /mount_point/BaseOS/Packages/syslinux-tftpboot-version-architecture.rpm /my_local_directory", "umount /mount_point", "rpm2cpio syslinux-tftpboot-version-architecture.rpm | cpio -dimv", "mkdir /var/lib/tftpboot/pxelinux", "cp /my_local_directory/tftpboot/* /var/lib/tftpboot/pxelinux", "mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg", "default vesamenu.c32 prompt 1 timeout 600 display boot.msg label linux menu label ^Install system menu default kernel images/RHEL-8/vmlinuz append initrd=images/RHEL-8/initrd.img ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ label vesa menu label Install system with ^basic video driver kernel images/RHEL-8/vmlinuz append initrd=images/RHEL-8/initrd.img ip=dhcp inst.xdriver=vesa nomodeset inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ label rescue menu label ^Rescue installed system kernel images/RHEL-8/vmlinuz append initrd=images/RHEL-8/initrd.img inst.rescue inst.repo=http:///192.168.124.2/RHEL-8/x86_64/iso-contents-root/ label local menu label Boot from ^local drive localboot 0xffff", "mkdir -p /var/lib/tftpboot/pxelinux/images/RHEL-8/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/images/RHEL-8/", "systemctl enable --now tftp.socket", "yum install tftp-server", "firewall-cmd --add-service=tftp", "mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro", "mkdir /var/lib/tftpboot/redhat cp -r /mount_point/EFI /var/lib/tftpboot/redhat/ umount /mount_point", "chmod -R 755 /var/lib/tftpboot/redhat/", "set timeout=60 menuentry 'RHEL 8' { linux images/RHEL-8/vmlinuz ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ initrd images/RHEL-8/initrd.img }", "mkdir -p /var/lib/tftpboot/images/RHEL-8/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img}/var/lib/tftpboot/images/RHEL-8/", "systemctl enable --now tftp.socket", "yum install tftp-server dhcp-server", "firewall-cmd --add-service=tftp", "grub2-mknetdir --net-directory=/var/lib/tftpboot Netboot directory for powerpc-ieee1275 created. 
Configure your DHCP server to point to /boot/grub2/powerpc-ieee1275/core.elf", "yum install grub2-ppc64-modules", "set default=0 set timeout=5 echo -e \"\\nWelcome to the Red Hat Enterprise Linux 8 installer!\\n\\n\" menuentry 'Red Hat Enterprise Linux 8' { linux grub2-ppc64/vmlinuz ro ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ initrd grub2-ppc64/initrd.img }", "mount -t iso9660 /path_to_image/name_of_iso/ /mount_point -o loop,ro", "cp /mount_point/ppc/ppc64/{initrd.img,vmlinuz} /var/lib/tftpboot/grub2-ppc64/", "subnet 192.168.0.1 netmask 255.255.255.0 { allow bootp; option routers 192.168.0.5; group { #BOOTP POWER clients filename \"boot/grub2/powerpc-ieee1275/core.elf\"; host client1 { hardware ethernet 01:23:45:67:89:ab; fixed-address 192.168.0.112; } } }", "systemctl enable --now dhcpd", "systemctl enable --now tftp.socket", "mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer", "mokutil --reset", "rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>", "rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207", "rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000", "ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart", "images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 images/initrd.addrsize 0x00010408", "qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"", "SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"", "DNS=\"10.1.2.3:10.3.2.1\"", "SEARCHDNS=\"subdomain.domain:domain\"", "DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"", "FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"", "FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"", "inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/", "ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents", "NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"", "logon user here", "cp ipl cms", "query disk", "cp query virtual storage", "cp query virtual osa", "cp query virtual dasd", "cp query virtual fcp", "yum install pykickstart", "ksvalidator -v RHEL8 /path/to/kickstart.ks", "cat /root/anaconda-ks.cfg", "yum install pykickstart", "ksvalidator -v RHEL8 /path/to/kickstart.ks", "yum install pykickstart", "ksvalidator -v RHEL8 /path/to/kickstart.ks", "yum install nfs-utils", "/ exported_directory / clients", "/rhel8-install *", "systemctl start nfs-server.service", "systemctl reload nfs-server.service", "yum install httpd", "yum install httpd mod_ssl", "systemctl start httpd.service", "yum install 
vsftpd", "systemctl enable firewalld systemctl start firewalld", "firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent firewall-cmd --reload", "restorecon -r /var/ftp/ your-kickstart-file.ks chmod 444 /var/ftp/ your-kickstart-file.ks", "systemctl start vsftpd.service", "systemctl restart vsftpd.service", "systemctl enable vsftpd", "lsblk -l -p -o name,rm,ro,hotplug,size,type,mountpoint,uuid", "umount /dev/xyz", "lsblk -l -p", "e2label /dev/xyz OEMDRV", "xfs_admin -L OEMDRV /dev/xyz", "umount /dev/xyz", "append initrd=initrd.img inst.ks=http://10.32.5.1/mnt/archive/RHEL-8/8.x/x86_64/kickstarts/ks.cfg", "kernel vmlinuz inst.ks=http://10.32.5.1/mnt/archive/RHEL-8/8.x/x86_64/kickstarts/ks.cfg", "cp link tcpmaint 592 592 acc 592 fm", "ftp <host> (secure", "cd / location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit", "VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17", "redhat", "cp ipl DASD_device_number loadparm boot_entry_number", "cp ipl eb1c loadparm 0", "cp set loaddev portname WWPN lun LUN bootprog boot_entry_number", "cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0", "query loaddev", "cp ipl FCP_device", "cp ipl fc00", "cp set loaddev portname WWPN lun FCP_LUN bootprog 1", "cp set loaddev portname 20010060 eb1c0103 lun 00010000 00000000 bootprog 1", "cp query loaddev", "cp ipl FCP_device", "cp ipl fc00", "subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>", "The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96", "subscription-manager syspurpose role --set \"VALUE\"", "subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"", "subscription-manager syspurpose role --list", "subscription-manager syspurpose role --unset", "subscription-manager syspurpose service-level --set \"VALUE\"", "subscription-manager syspurpose service-level --set \"Standard\"", "subscription-manager syspurpose service-level --list", "subscription-manager syspurpose service-level --unset", "subscription-manager syspurpose usage --set \"VALUE\"", "subscription-manager syspurpose usage --set \"Production\"", "subscription-manager syspurpose usage --list", "subscription-manager syspurpose usage --unset", "subscription-manager syspurpose --show", "man subscription-manager", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. 
System Purpose Status: Disabled", "CP ATTACH EB1C TO *", "CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W", "cio_ignore -r device_number", "cio_ignore -r 4b2e", "chccwdev -e device_number", "chccwdev -e 4b2e", "cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%", "Rereading the partition table Exiting", "fdasd -a /dev/disk/by-path/ccw-0.0.4b2e reading volume label ..: VOL1 reading vtoc ..........: ok auto-creating one partition for the whole disk writing volume label writing VTOC rereading partition table", "machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf", "title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda Device driver name..............: dasd DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 13356 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 262152 Building bootmap in '/boot' Building menu 'zipl-automatic-menu' Adding #1: IPL section '4.18.0-80.el8.s390x' (default) initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' component address: kernel image....: 
0x00010000-0x0049afff parmline........: 0x0049b000-0x0049bfff initial ramdisk.: 0x004a0000-0x01a26fff internal loader.: 0x0000a000-0x0000cfff Preparing boot menu Interactive prompt......: enabled Menu timeout............: 5 seconds Default configuration...: '4.18.0-80.el8.s390x' Preparing boot device: dasda (0201). Syncing disks Done.", "0.0.0207 0.0.0200 use_diag=1 readonly=1", "cio_ignore -r device_number", "cio_ignore -r 021a", "echo add > /sys/bus/ccw/devices/ dasd-bus-ID /uevent", "echo add > /sys/bus/ccw/devices/0.0.021a/uevent", "machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf", "title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf' Target device information Device..........................: 08:00 Partition.......................: 08:01 Device name.....................: sda Device driver name..............: sd Type............................: disk partition Disk layout.....................: SCSI disk layout Geometry - start................: 2048 File system block size..........: 4096 Physical block size.............: 512 Device size in physical blocks..: 10074112 Building bootmap in '/boot/' Building menu 'rh-automatic-menu' Adding #1: IPL section '4.18.0-32.el8.s390x' (default) kernel image......: /boot/vmlinuz-4.18.0-32.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' initial ramdisk...: /boot/initramfs-4.18.0-32.el8.s390x.img component address: kernel image....: 0x00010000-0x007a21ff parmline........: 0x00001000-0x000011ff initial ramdisk.: 0x02000000-0x028f63ff internal loader.: 0x0000a000-0x0000a3ff Preparing boot device: sda. Detected SCSI PCBIOS disk layout. Writing SCSI master boot record. 
Syncing disks Done.", "0.0.fc00 0x5105074308c212e9 0x401040a000000000 0.0.fc00 0x5105074308c212e9 0x401040a100000000 0.0.fc00 0x5105074308c212e9 0x401040a300000000 0.0.fcd0 0x5105074308c2aee9 0x401040a000000000 0.0.fcd0 0x5105074308c2aee9 0x401040a100000000 0.0.fcd0 0x5105074308c2aee9 0x401040a300000000 0.0.4000 0.0.5000", "cio_ignore -r device_number", "cio_ignore -r fcfc", "echo add > /sys/bus/ccw/devices/device-bus-ID/uevent", "echo add > /sys/bus/ccw/devices/0.0.fcfc/uevent", "lsmod | grep qeth qeth_l3 69632 0 qeth_l2 49152 1 qeth 131072 2 qeth_l3,qeth_l2 qdio 65536 3 qeth,qeth_l3,qeth_l2 ccwgroup 20480 1 qeth", "modprobe qeth", "cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id", "cio_ignore -r 0.0.f500,0.0.f501,0.0.f502", "znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth", "znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (encf500)", "znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (encf500)", "echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group", "echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group", "ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500", "echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online", "cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1", "cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name encf500", "lsqeth encf500 Device name : encf500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none", "cd /etc/sysconfig/network-scripts # cp ifcfg-enc9a0 ifcfg-enc600", "lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enc9a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enc600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64", "IBM QETH DEVICE=enc9a0 BOOTPROTO=static IPADDR=10.12.20.136 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.09a0,0.0.09a1,0.0.09a2 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:23:65:1a TYPE=Ethernet", "IBM QETH DEVICE=enc600 BOOTPROTO=static IPADDR=192.168.70.87 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:b3:84:ef TYPE=Ethernet", "cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id", "cio_ignore -r 0.0.0600,0.0.0601,0.0.0602", "echo add > /sys/bus/ccw/devices/read-channel/uevent", "echo add > /sys/bus/ccw/devices/0.0.0600/uevent", "lsqeth", "ifup enc600", "ip addr show enc600 3: enc600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff inet 10.85.1.245/24 brd 10.34.3.255 scope global dynamic enc600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 
604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever", "ip route default via 10.85.1.245 dev enc600 proto static metric 1024 12.34.4.95/24 dev enp0s25 proto kernel scope link src 12.34.4.201 12.38.4.128 via 12.38.19.254 dev enp0s25 proto dhcp metric 1 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1", "ping -c 1 192.168.70.8 PING 192.168.70.8 (192.168.70.8) 56(84) bytes of data. 64 bytes from 192.168.70.8: icmp_seq=0 ttl=63 time=8.07 ms", "machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf", "root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us", "subscription-manager unregister", "%packages @^Infrastructure Server %end", "%packages @X Window System @Desktop @Sound and Video %end", "%packages sqlite curl aspell docbook* %end", "%packages @module:stream/profile %end", "%packages -@Graphical Administration Tools -autofs -ipa*compat %end", "%packages --multilib --ignoremissing", "%packages --multilib --default %end", "%packages @Graphical Administration Tools --optional %end", "%pre --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end", "%pre --log=/tmp/ks-pre.log", "%pre-install --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end", "%pre-install --log=/mnt/sysroot/root/ks-pre.log", "%post --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end", "%post --interpreter=/usr/libexec/platform-python", "%post --nochroot cp /etc/resolv.conf /mnt/sysroot/etc/resolv.conf %end", "%post --log=/root/ks-post.log", "%post --nochroot --log=/mnt/sysroot/root/ks-post.log", "Start of the %post section with logging into /root/ks-post.log %post --log=/root/ks-post.log Mount an NFS share mkdir /mnt/temp mount -o nolock 10.10.0.2:/usr/new-machines /mnt/temp openvt -s -w -- /mnt/temp/runme umount /mnt/temp End of the %post section %end", "%post --log=/root/ks-post.log subscription-manager register [email protected] --password=secret --auto-attach %end", "%anaconda pwpolicy root --minlen=10 --strict %end", "%onerror --interpreter=/usr/libexec/platform-python", "%addon com_redhat_kdump --enable --reserve-mb=auto %end", "cdrom", "cmdline", "driverdisk [ partition |--source= url |--biospart= biospart ]", "driverdisk --source=ftp://path/to/dd.img driverdisk --source=http://path/to/dd.img driverdisk --source=nfs:host:/path/to/dd.img", "driverdisk LABEL = DD :/e1000.rpm", "eula [--agreed]", "firstboot OPTIONS", "graphical [--non-interactive]", "halt", "harddrive OPTIONS", "harddrive --partition=hdb2 --dir=/tmp/install-tree", "install installation_method", "liveimg --url= SOURCE [ OPTIONS ]", "liveimg --url=file:///images/install/squashfs.img --checksum=03825f567f17705100de3308a20354b4d81ac9d8bed4bb4692b2381045e56197 --noverifyssl", "logging OPTIONS", "mediacheck", "nfs OPTIONS", "nfs --server=nfsserver.example.com --dir=/tmp/install-tree", "ostreesetup --osname= OSNAME [--remote= REMOTE ] --url= URL --ref= REF [--nogpg]", "poweroff", "reboot OPTIONS", "shutdown", "sshpw --username= name [ OPTIONS ] password", "python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "sshpw 
--username= example_username example_password --plaintext sshpw --username=root example_password --lock", "sshpw --username=root example_password --lock", "text [--non-interactive]", "url --url= FROM [ OPTIONS ]", "url --url=http:// server / path", "url --url=ftp:// username : password @ server / path", "vnc [--host= host_name ] [--port= port ] [--password= password ]", "hmc", "%include path/to/file", "%ksappend path/to/file", "authconfig [ OPTIONS ]", "authselect [ OPTIONS ]", "firewall --enabled|--disabled [ incoming ] [ OPTIONS ]", "group --name= name [--gid= gid ]", "keyboard --vckeymap|--xlayouts OPTIONS", "keyboard --xlayouts=us,'cz (qwerty)' --switch=grp:alt_shift_toggle", "lang language [--addsupport= language,... ]", "lang en_US --addsupport=cs_CZ,de_DE,en_UK", "lang en_US", "module --name= NAME [--stream= STREAM ]", "repo --name= repoid [--baseurl= url |--mirrorlist= url |--metalink= url ] [ OPTIONS ]", "rootpw [--iscrypted|--plaintext] [--lock] password", "python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "selinux [--disabled|--enforcing|--permissive]", "services [--disabled= list ] [--enabled= list ]", "services --disabled=auditd,cups,smartd,nfslock", "services --disabled=auditd, cups, smartd, nfslock", "skipx", "sshkey --username= user \"ssh_key\"", "syspurpose [ OPTIONS ]", "syspurpose --role=\"Red Hat Enterprise Linux Server\"", "timezone timezone [ OPTIONS ]", "user --name= username [ OPTIONS ]", "python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "xconfig [--startxonboot]", "network OPTIONS", "network --bootproto=dhcp", "network --bootproto=bootp", "network --bootproto=ibft", "network --bootproto=static --ip=10.0.2.15 --netmask=255.255.255.0 --gateway=10.0.2.254 --nameserver=10.0.2.1", "network --bootproto=static --ip=10.0.2.15 --netmask=255.255.255.0 --gateway=10.0.2.254 --nameserver=192.168.2.1,192.168.3.1", "network --bootproto=dhcp --device=em1", "network --device ens3 --ipv4-dns-search domain1.example.com,domain2.example.com", "network --device=bond0 --bondslaves=em1,em2", "network --bondopts=mode=active-backup,balance-rr;primary=eth1", "network --device=em1 --vlanid=171 --interfacename=vlan171", "network --teamslaves=\"p3p1'{\\\"prio\\\": -10, \\\"sticky\\\": true}',p3p2'{\\\"prio\\\": 100}'\"", "network --device team0 --activate --bootproto static --ip=10.34.102.222 --netmask=255.255.255.0 --gateway=10.34.102.254 --nameserver=10.34.39.2 --teamslaves=\"p3p1'{\\\"prio\\\": -10, \\\"sticky\\\": true}',p3p2'{\\\"prio\\\": 100}'\" --teamconfig=\"{\\\"runner\\\": {\\\"name\\\": \\\"activebackup\\\"}}\"", "network --device=bridge0 --bridgeslaves=em1", "realm join [ OPTIONS ] domain", "device moduleName --opts= options", "device --opts=\"aic152x=0x340 io=11\"", "ignoredisk --drives= drive1,drive2 ,... 
| --only-use= drive", "ignoredisk --only-use=sda", "ignoredisk --only-use=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017", "ignoredisk --only-use==/dev/disk/by-id/dm-uuid-mpath-", "bootloader --location=mbr", "ignoredisk --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017", "part / --fstype=xfs --onpart=sda1", "part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1", "clearpart OPTIONS", "clearpart --drives=hda,hdb --all", "clearpart --drives=disk/by-id/scsi-58095BEC5510947BE8C0360F604351918", "clearpart --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017", "clearpart --initlabel --drives=names_of_disks", "clearpart --initlabel --drives=dasda,dasdb,dasdc", "clearpart --list=sda2,sda3,sdb1", "part / --fstype=xfs --onpart=sda1", "part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1", "zerombr", "bootloader [ OPTIONS ]", "bootloader --location=mbr --append=\"hdd=ide-scsi ide=nodma\"", "%packages -plymouth %end", "bootloader --driveorder=sda,hda", "bootloader --iscrypted --password=grub.pbkdf2.sha512.10000.5520C6C9832F3AC3D149AC0B24BE69E2D4FB0DBEEDBD29CA1D30A044DE2645C4C7A291E585D4DC43F8A4D82479F8B95CA4BA4381F8550510B75E8E0BB2938990.C688B6F0EF935701FF9BD1A8EC7FE5BD2333799C98F28420C5CC8F1A2A233DE22C83705BB614EA17F3FDFDF4AC2161CEA3384E56EB38A2E39102F5334C47405E", "part / --fstype=xfs --onpart=sda1", "part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1", "autopart OPTIONS", "reqpart [--add-boot]", "part|partition mntpoint [ OPTIONS ]", "swap --recommended", "swap --hibernation", "partition /home --onpart=hda1", "partition pv.1 --onpart=hda2", "partition pv.1 --onpart=hdb", "part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017", "part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"", "part / --fstype=xfs --onpart=sda1", "part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1", "raid mntpoint --level= level --device= device-name partitions*", "part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"", "part raid.01 --size=6000 --ondisk=sda part raid.02 --size=6000 --ondisk=sdb part raid.03 --size=6000 --ondisk=sdc part swap --size=512 --ondisk=sda part swap --size=512 --ondisk=sdb part swap --size=512 --ondisk=sdc part raid.11 --size=1 --grow --ondisk=sda part raid.12 --size=1 --grow --ondisk=sdb part raid.13 --size=1 --grow --ondisk=sdc raid / --level=1 --device=rhel8-root --label=rhel8-root raid.01 raid.02 raid.03 raid /home --level=5 --device=rhel8-home --label=rhel8-home raid.11 raid.12 raid.13", "volgroup name [ OPTIONS ] [ partition *]", "volgroup rhel00 --useexisting --noformat", "part pv.01 --size 10000 volgroup my_volgrp pv.01 logvol / --vgname=my_volgrp --size=2000 --name=root", "logvol mntpoint --vgname= name --name= name [ OPTIONS ]", "swap --recommended", "swap --hibernation", "part /opt/foo1 --size=512 --fstype=ext4 
--mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"", "part pv.01 --size 3000 volgroup myvg pv.01 logvol / --vgname=myvg --size=2000 --name=rootvol", "part pv.01 --size 1 --grow volgroup myvg pv.01 logvol / --vgname=myvg --name=rootvol --percent=90", "snapshot vg_name/lv_name --name= snapshot_name --when= pre-install|post-install", "mount [ OPTIONS ] device mountpoint", "fcoe --nic= name [ OPTIONS ]", "iscsi --ipaddr= address [ OPTIONS ]", "iscsiname iqname", "nvdimm action [ OPTIONS ]", "nvdimm reconfigure [--namespace= NAMESPACE ] [--mode= MODE ] [--sectorsize= SECTORSIZE ]", "nvdimm reconfigure --namespace=namespace0.0 --mode=sector --sectorsize=512", "nvdimm reconfigure --namespace=namespace0.0 --mode=sector --sectorsize=512", "nvdimm use [--namespace= NAMESPACE |--blockdevs= DEVICES ]", "nvdimm use --namespace=namespace0.0", "nvdimm use --blockdevs=pmem0s,pmem1s nvdimm use --blockdevs=pmem*", "zfcp --devnum= devnum [--wwpn= wwpn --fcplun= lun ]", "zfcp --devnum=0.0.4000 --wwpn=0x5005076300C213e9 --fcplun=0x5022000000000000 zfcp --devnum=0.0.4000", "%addon com_redhat_kdump [ OPTIONS ] %end", "%addon com_redhat_kdump --enable --reserve-mb=128 %end", "%addon org_fedora_oscap key = value %end", "%addon org_fedora_oscap content-type = scap-security-guide profile = xccdf_org.ssgproject.content_profile_pci-dss %end", "%addon org_fedora_oscap content-type = datastream content-url = http://www.example.com/scap/testing_ds.xml datastream-id = scap_example.com_datastream_testing xccdf-id = scap_example.com_cref_xccdf.xml profile = xccdf_example.com_profile_my_profile fingerprint = 240f2f18222faa98856c3b4fc50c4195 %end", "pwpolicy name [--minlen= length ] [--minquality= quality ] [--strict|--notstrict] [--emptyok|--notempty] [--changesok|--nochanges]", "rescue [--nomount|--romount]", "rescue [--nomount|--romount]", "inst.stage2=https://hostname/path_to_install_image/ inst.noverifyssl", "inst.repo=https://hostname/path_to_install_repository/ inst.noverifyssl", "inst.stage2.all inst.stage2=http://hostname1/path_to_install_tree/ inst.stage2=http://hostname2/path_to_install_tree/ inst.stage2=http://hostname3/path_to_install_tree/", "[PROTOCOL://][USERNAME[:PASSWORD]@]HOST[:PORT]", "inst.nosave=Input_ks,logs", "ifname=eth0:01:23:45:67:89:ab", "vlan=vlan5:enp0s1", "bond=bond0:enp0s1,enp0s2:mode=active-backup,tx_queues=32,downdelay=5000", "team=team0:enp0s1,enp0s2", "bridge=bridge0:enp0s1,enp0s2", "modprobe.blacklist=ahci,firewire_ohci", "modprobe.blacklist=virtio_blk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/automatically_installing_rhel/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_program_policy_guide/con-conscious-language-message
Compatibility Guide
Compatibility Guide Red Hat Ceph Storage 8 Red Hat Ceph Storage and Its Compatibility With Other Products Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/compatibility_guide/index
Chapter 7. Global File System 2
Chapter 7. Global File System 2 The Red Hat Global File System 2 (GFS2) is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs distributed metadata and multiple journals. GFS2 is based on 64-bit architecture, which can theoretically accommodate an 8 exabyte file system. However, the current supported maximum size of a GFS2 file system is 100 TB. If a system requires GFS2 file systems larger than 100 TB, contact your Red Hat service representative. When determining the size of a file system, consider its recovery needs. Running the fsck command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of backup media. When configured in a Red Hat Cluster Suite, Red Hat GFS2 nodes can be configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system namespace across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way that processes on the same node can share files on a local file system, with no discernible difference. For information about the Red Hat Cluster Suite, see Red Hat's Cluster Administration guide. A GFS2 file system must be built on a logical volume (created with LVM) that is a linear or mirrored volume. Logical volumes created with LVM in a Red Hat Cluster Suite are managed with CLVM (a cluster-wide implementation of LVM), enabled by the CLVM daemon clvmd , and running in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the Logical Volume Manager, see Red Hat's Logical Volume Manager Administration guide. The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. For comprehensive information on the creation and configuration of GFS2 file systems in clustered and non-clustered storage, see Red Hat's Global File System 2 guide.
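A brief command sketch may help make the clustered setup concrete; the cluster name, volume group, logical volume, and mount point below are hypothetical and not taken from the guide. The mkfs.gfs2 options select the DLM locking protocol (-p), the cluster:filesystem lock table name (-t), and one journal per cluster node (-j).
# Hypothetical example: create and mount a two-node clustered GFS2 file system on an existing CLVM logical volume
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 2 /dev/vg_cluster/lv_gfs2
mount -t gfs2 /dev/vg_cluster/lv_gfs2 /mnt/gfs2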
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-gfs2
Chapter 139. KafkaBridgeProducerSpec schema reference
Chapter 139. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeProducerSpec schema properties Configures producer options for the Kafka Bridge. Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: enabled: true config: acks: 1 delivery.timeout.ms: 300000 # ... Use the producer.config properties to configure Kafka options for the producer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate the keys or values of config properties. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 139.1. KafkaBridgeProducerSpec schema properties Property Property type Description enabled boolean Whether the HTTP producer should be enabled or disabled. The default is enabled ( true ). config map The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: enabled: true config: acks: 1 delivery.timeout.ms: 300000 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeProducerSpec-reference
Chapter 11. Known issues
Chapter 11. Known issues This part describes known issues in Red Hat Enterprise Linux 8.7. 11.1. Installer and image creation During RHEL installation on IBM Z, udev does not assign predictable interface names to RoCE cards enumerated by FID If you start a RHEL 8.7 or later installation with the net.naming-scheme=rhel-8.7 kernel command-line option, the udev device manager on the RHEL installation media ignores this setting for RoCE cards enumerated by the function identifier (FID). As a consequence, udev assigns unpredictable interface names to these devices. There is no workaround during the installation, but you can configure the feature after the installation. For further details, see Determining a predictable RoCE device name on the IBM Z platform . (JIRA:RHEL-11397) Installation fails on IBM Power 10 systems with LPAR and secure boot enabled The RHEL installer is not integrated with static key secure boot on IBM Power 10 systems. Consequently, when logical partition (LPAR) is enabled with the secure boot option, the installation fails with the error, Unable to proceed with RHEL-x.x Installation . To work around this problem, install RHEL without enabling secure boot. After booting the system: Copy the signed kernel into the PReP partition using the dd command. Restart the system and enable secure boot. Once the firmware verifies the bootloader and the kernel, the system boots up successfully. For more information, see https://www.ibm.com/support/pages/node/6528884 (BZ#2025814) Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example to perform another installation to an image file using the -image anaconda option), the system is not prohibited from modifying the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system; execute it in a temporary virtual machine instead, so that the SELinux policy on the production system is not modified. Running Anaconda as part of the system installation process, such as installing from boot.iso or dvd.iso, is not affected by this issue. ( BZ#2050140 ) The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is used as the installation source and the Kickstart ignoredisk --only-use= command is specified.
In this case, Anaconda cannot find and use this source disk. To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail. ( BZ#1914955 ) Network access is not enabled by default in the installation program Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled. To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used. (BZ#1757877) Hard drive partitioned installations with iso9660 filesystem fail You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To work around this problem, add a pre-installation ( %pre ) script to the Kickstart file that wipes the disk with the wipefs command before the installation starts. Note: Before performing the workaround, back up the data available on the disk. The wipefs command removes all the existing data from the disk. As a result, installations work as expected without any errors. ( BZ#1929105 ) IBM Power systems with HASH MMU mode fail to boot with memory allocation failures IBM Power Systems with HASH memory allocation unit (MMU) mode support kdump up to a maximum of 192 cores. Consequently, the system fails to boot with memory allocation failures if kdump is enabled on more than 192 cores. This limitation is due to RMA memory allocations during early boot in HASH MMU mode. To work around this problem, use the Radix MMU mode with fadump enabled instead of using kdump . (BZ#2028361) RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error: To work around this issue: Use an automatic partitioning scheme and do not add any mount points manually. Manually assign mount points only inside the /var directory. For example, /var/ my-mount-point , and the following standard directories: / , /boot , /var . As a result, the installation process finishes successfully. ( BZ#2126506 ) The --size parameter of composer-cli compose start treats values as bytes instead of MiB When using the composer-cli compose start --size size_value blueprint_name image_type command, the --size parameter should be specified in the MiB format. However, a bug in the settings causes the composer-cli tool to treat this parameter as bytes. To work around this issue, multiply the size value by 1048576. Alternatively, use the filesystem customization in your blueprint. The customization allows a more granular control over filesystems and accepts units like MiB or GiB. See Supported image customizations . ( BZ#2033192 ) 11.2. Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output.
In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. ( BZ#1687900 ) 11.3. Software management cr_compress_file_with_stat() can cause a memory leak The createrepo_c C library has the API cr_compress_file_with_stat() function. This function is declared with char **dst as a second parameter. Depending on its other parameters, cr_compress_file_with_stat() either uses dst as an input parameter, or uses it to return an allocated string. This unpredictable behavior can cause a memory leak, because it does not inform the user when to free dst contents. To work around this problem, a new API cr_compress_file_with_stat_v2 function has been added, which uses the dst parameter only as an input. It is declared as char *dst . This prevents memory leak. Note that the cr_compress_file_with_stat_v2 function is temporary and will be present only in RHEL 8. Later, cr_compress_file_with_stat() will be fixed instead. (BZ#1973588) YUM transactions reported as successful when a scriptlet fails Since RPM version 4.6, post-install scriptlets are allowed to fail without being fatal to the transaction. This behavior propagates up to YUM as well. This results in scriptlets which might occasionally fail while the overall package transaction reports as successful. There is no workaround available at the moment. Note that this is expected behavior that remains consistent between RPM and YUM. Any issues in scriptlets should be addressed at the package level. ( BZ#1986657 ) A security YUM upgrade fails for packages that change their architecture through the upgrade The patch for BZ#2088149 , released with the RHBA-2022:7711 advisory, introduced the following regression: The YUM upgrade using security filters fails for packages that change their architecture from or to noarch through the upgrade. Consequently, it can leave the system in a vulnerable state. To work around this problem, perform the regular upgrade without security filters. ( BZ#2088149 ) 11.4. Shells and command-line tools ipmitool is incompatible with certain server platforms The ipmitool utility serves for monitoring, configuring, and managing devices that support the Intelligent Platform Management Interface (IPMI). The current version of ipmitool uses Cipher Suite 17 by default instead of the Cipher Suite 3. Consequently, ipmitool fails to communicate with certain bare metal nodes that announced support for Cipher Suite 17 during negotiation, but do not actually support this cipher suite. As a result, ipmitool aborts with the no matching cipher suite error message. For more details, see the related Knowledgebase article . To solve this problem, update your baseboard management controller (BMC) firmware to use the Cipher Suite 17. Optionally, if the BMC firmware update is not available, you can work around this problem by forcing ipmitool to use a certain cipher suite. When invoking a managing task with ipmitool , add the -C option to the ipmitool command together with the number of the cipher suite you want to use. 
For example, add the -C 3 option to fall back to Cipher Suite 3. ( BZ#1873614 ) ReaR fails to recreate a volume group when you do not use clean disks for restoring ReaR fails to perform recovery when you want to restore to disks that contain existing data. To work around this problem, wipe the disks manually before restoring to them if they have been previously used. To wipe the disks in the rescue environment, use one of the following commands before running the rear recover command: The dd command to overwrite the disks. The wipefs command with the -a flag to erase all available metadata. See the following example of wiping metadata from the /dev/sda disk: This command wipes the metadata from the partitions on /dev/sda first, and then the partition table itself. ( BZ#1925531 ) coreutils might report misleading EPERM error codes GNU Core Utilities ( coreutils ) started using the statx() system call. If a seccomp filter returns an EPERM error code for unknown system calls, coreutils might consequently report misleading EPERM error codes because EPERM cannot be distinguished from the actual Operation not permitted error returned by a working statx() syscall. To work around this problem, update the seccomp filter to either permit the statx() syscall, or to return an ENOSYS error code for syscalls it does not know. ( BZ#2030661 ) 11.5. Infrastructure services Postfix TLS fingerprint algorithm in the FIPS mode needs to be changed to SHA-256 By default in RHEL 8, postfix uses MD5 fingerprints with TLS for backward compatibility. However, in the FIPS mode, the MD5 hashing function is not available, which may cause TLS to function incorrectly in the default postfix configuration. To work around this problem, the hashing function needs to be changed to SHA-256 in the postfix configuration file. For more details, see the related Knowledgebase article Fix postfix TLS in the FIPS mode by switching to SHA-256 instead of MD5 . ( BZ#1711885 ) rsync fails while using the --delete and the --filter '-x string .*' option together The rsync utility for transferring and synchronizing files is unable to handle extended attributes in RHEL 8 correctly. Consequently, if you pass the --delete option together with the --filter '-x string .*' option for extended attributes to the rsync command, and a file on your system satisfies the regular expression, an error stating protocol incompatibilities occurs. For example, if you use the --filter '-x system.*' option, the filter finds the system.mwmrc file, which is present on your system, and rsync fails. See the following error message that occurs after using the --filter '-x system.*' option: To prevent this problem, use regular expressions for extended attributes with caution. ( BZ#2139118 ) The brltty package is not multilib compatible It is not possible to have both 32-bit and 64-bit versions of the brltty package installed. You can either install the 32-bit ( brltty.i686 ) or the 64-bit ( brltty.x86_64 ) version of the package. The 64-bit version is recommended. ( BZ#2008197 ) 11.6. Security File permissions of /etc/passwd- are not aligned with the CIS RHEL 8 Benchmark 1.0.0 Because of an issue with the CIS Benchmark, the remediation of the SCAP rule that ensures permissions on the /etc/passwd- backup file configures permissions to 0644 . However, the CIS Red Hat Enterprise Linux 8 Benchmark 1.0.0 requires file permissions 0600 for that file. As a consequence, the file permissions of /etc/passwd- are not aligned with the benchmark after remediation.
( BZ#1858866 ) libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the yum install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. (BZ#1666328) udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. ( BZ#1763210 ) SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks. To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title if your scenario really requires to completely disable SELinux. (JIRA:RHELPLAN-34199) sshd -T provides inaccurate information about Ciphers, MACs and KeX algorithms The output of the sshd -T command does not contain the system-wide crypto policy configuration or other options that could come from an environment file in /etc/sysconfig/sshd and that are applied as arguments on the sshd command. This occurs because the upstream OpenSSH project did not support the Include directive to support Red-Hat-provided cryptographic defaults in RHEL 8. Crypto policies are applied as command-line arguments to the sshd executable in the sshd.service unit during the service's start by using an EnvironmentFile . To work around the problem, use the source command with the environment file and pass the crypto policy as an argument to the sshd command, as in sshd -T USDCRYPTO_POLICY . For additional information, see Ciphers, MACs or KeX algorithms differ from sshd -T to what is provided by current crypto policy level . As a result, the output from sshd -T matches the currently configured crypto policy. (BZ#2044354) OpenSSL in FIPS mode accepts only specific D-H parameters In FIPS mode, TLS clients that use OpenSSL return a bad dh value error and abort TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with Diffie-Hellman parameters compliant to NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups. 
(BZ#1810911)

crypto-policies incorrectly allow Camellia ciphers
The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default. To work around the problem, apply the NO-CAMELLIA subpolicy:
update-crypto-policies --set DEFAULT:NO-CAMELLIA
In the command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously. As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround. ( BZ#1919155 )

Smart-card provisioning process through OpenSC pkcs15-init does not work properly
The file_caching option is enabled in the default OpenSC configuration, and the file caching functionality does not handle some commands from the pkcs15-init tool properly. Consequently, the smart-card provisioning process through OpenSC fails. To work around the problem, add the following snippet to the /etc/opensc.conf file:
app pkcs15-init {
    framework pkcs15 {
        use_file_caching = false;
    }
}
Smart-card provisioning through pkcs15-init works only if you apply the previously described workaround. ( BZ#1947025 )

Connections to servers with SHA-1 signatures do not work with GnuTLS
SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with a SHA-256 or stronger hash, or switch to the LEGACY policy. (BZ#1628553)

IKE over TCP connections do not work on custom TCP ports
The tcp-remoteport Libreswan configuration option does not work properly. Consequently, an IKE over TCP connection cannot be established when a scenario requires specifying a non-default TCP port. ( BZ#1989050 )

RHV hypervisor may not work correctly when hardening the system during installation
When installing Red Hat Virtualization Hypervisor (RHV-H) and applying the Red Hat Enterprise Linux 8 STIG profile, OSCAP Anaconda Add-on may harden the system as RHEL instead of RHV-H and remove essential packages for RHV-H. Consequently, the RHV hypervisor may not work. To work around the problem, install the RHV-H system without applying any profile hardening, and after the installation is complete, apply the profile by using OpenSCAP. As a result, the RHV hypervisor works correctly. ( BZ#2075508 )

Red Hat provides the CVE OVAL reports in compressed format
Red Hat provides CVE OVAL feeds in the bzip2-compressed format, and they are no longer available in the XML file format. The location of feeds for RHEL 8 has been updated accordingly to reflect this change. Because referencing compressed content is not standardized, third-party SCAP scanners can have problems with scanning rules that use the feed. ( BZ#2028428 )

Certain sets of interdependent rules in SSG can fail
Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules.
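For example, the second remediation pass can be run with oscap. The CIS profile name and data stream path below are illustrative and depend on the scap-security-guide content installed on the system:
# Re-run the remediation so that rules depending on components installed
# by the first pass can now apply cleanly.
oscap xccdf eval --remediate \
    --profile xccdf_org.ssgproject.content_profile_cis \
    /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml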
( BZ#1750755 )

Server with GUI and Workstation installations are not possible with CIS Server profiles
The CIS Server Level 1 and Level 2 security profiles are not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS Server profiles is not possible. An attempted installation using the CIS Server Level 1 or Level 2 profiles and either of these software selections will generate the error message:
package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.
If you need to align systems with the Server with GUI or Workstation software selections according to CIS benchmarks, use the CIS Workstation Level 1 or Level 2 profiles instead. ( BZ#1843932 )

Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8
The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap , which might cause confusion. This is necessary for backward compatibility with Red Hat Enterprise Linux 7. (BZ#1665082)

SSH timeout rules in STIG profiles configure incorrect options
An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles:
- DISA STIG for RHEL 8 ( xccdf_org.ssgproject.content_profile_stig )
- DISA STIG with GUI for RHEL 8 ( xccdf_org.ssgproject.content_profile_stig_gui )
In each of these profiles, the following two rules are affected:
Title: Set SSH Client Alive Count Max to zero
CCE Identifier: CCE-83405-1
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0
STIG ID: RHEL-08-010200
Title: Set SSH Idle Timeout Interval
CCE Identifier: CCE-80906-1
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout
STIG ID: RHEL-08-010201
When applied to SSH servers, each of these rules configures an option ( ClientAliveCountMax and ClientAliveInterval ) that no longer behaves as it did previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 8 and DISA STIG with GUI for RHEL 8 profiles until a solution is developed. ( BZ#2038977 )

Bash remediations of certain Audit rules do not work correctly
SCAP Security Guide (SSG) Bash remediations for the following SCAP rules do not add the Audit key:
- audit_rules_login_events
- audit_rules_login_events_faillock
- audit_rules_login_events_lastlog
- audit_rules_login_events_tallylog
- audit_rules_usergroup_modification
- audit_rules_usergroup_modification_group
- audit_rules_usergroup_modification_gshadow
- audit_rules_usergroup_modification_opasswd
- audit_rules_usergroup_modification_passwd
- audit_rules_usergroup_modification_shadow
- audit_rules_time_watch_localtime
- audit_rules_mac_modification
- audit_rules_networkconfig_modification
- audit_rules_sysadmin_actions
- audit_rules_session_events
- audit_rules_sudoers
- audit_rules_sudoers_d
Consequently, remediation scripts fix access bits and paths in the remediated rules, but the rules without the Audit key do not conform to the OVAL check. Therefore, scans after remediations of such rules report FAIL. To work around the problem, add the keys to the affected rules manually. ( BZ#2119356 )

Certain rsyslog priority strings do not work correctly
Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog:
NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL
To work around this problem, use only correctly working priority strings:
NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL
As a result, current configurations must be limited to the strings that work correctly.
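A minimal sketch of applying one of the working strings to an imtcp listener follows. The port and TLS driver settings are illustrative, and the exact parameter placement can differ between rsyslog versions, so verify it against the imtcp documentation:
# /etc/rsyslog.d/tls-listener.conf (sketch)
module(load="imtcp"
       StreamDriver.Name="gtls"
       StreamDriver.Mode="1"
       StreamDriver.AuthMode="anon"
       gnutlsPriorityString="NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL")
input(type="imtcp" port="6514")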
( BZ#1679512 ) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. (BZ#1834716) 11.7. Networking NetworkManager does not support activating bond and team ports in a specific order NetworkManager activates interfaces alphabetically by interface names. However, if an interface appears later during the boot, for example, because the kernel needs more time to discover it, NetworkManager activates this interface later. NetworkManager does not support setting a priority on bond and team ports. Consequently, the order in which NetworkManager activates ports of these devices is not always predictable. To work around this problem, write a dispatcher script. For an example of such a script, see the corresponding comment in the ticket. ( BZ#1920398 ) The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces Based on the information received from the cloud environment, the nm-cloud-setup service configures network interfaces. Disable nm-cloud-setup to manually configure interfaces. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To avoid that nm-cloud-setup removes secondary IP addresses: Stop and disable the nm-cloud-setup service and timer: Display the available connection profiles: Reactive the affected connection profiles: As a result, the service no longer removes manually-configured secondary IP addresses from interfaces. ( BZ#2132754 ) Systems with the IPv6_rpfilter option enabled experience low network throughput Systems with the IPv6_rpfilter option enabled in the firewalld.conf file currently experience suboptimal performance and low network throughput in high traffic scenarios, such as 100-Gbps links. To work around the problem, disable the IPv6_rpfilter option. To do so, add the following line in the /etc/firewalld/firewalld.conf file. As a result, the system performs better, but also has reduced security. (BZ#1871860) RoCE interfaces on IBM Z lose their IP settings due to an unexpected change of the network interface name In RHEL 8.6 and earlier, the udev device manager assigns on the IBM Z platform unpredictable device names to RoCE interfaces that are enumerated by a unique identifier (UID). However, in RHEL 8.7 and later, udev assigns predictable device names with the eno prefix to these interfaces. If you update from RHEL 8.6 or earlier to 8.7 or later, these UID-enumerated interfaces have new names and no longer match the device names in NetworkManager connection profiles. Consequently, these interfaces have no IP configuration after the update. 
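As an illustration only (the profile and interface names below are hypothetical), an affected connection profile can be re-bound to the new predictable name with nmcli:
# Point the existing profile at the renamed interface and reactivate it.
nmcli connection modify "RoCE-conn" connection.interface-name eno1234
nmcli connection up "RoCE-conn"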
For workarounds you can apply before the update and a fix if you have already updated the system, see RoCE interfaces on IBM Z lose their IP settings after updating to RHEL 8.7 or later . ( BZ#2169382 ) 11.8. Kernel Secure boot on IBM Power Systems does not support migration Currently, on IBM Power Systems, logical partition (LPAR) does not boot after successful physical volume (PV) migration. As a consequence, any type of automated migration with secure boot enabled on a partition fails. (BZ#2126777) Using page_poison=1 can cause a kernel crash When using page_poison=1 as the kernel parameter on firmware with faulty EFI implementation, the operating system can cause the kernel to crash. By default, this option is disabled and it is not recommended to enable it, especially in production systems. (BZ#2050411) weak-modules from kmod fails to work with module inter-dependencies The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes modules symbol dependencies from higher to lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario. To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel. (BZ#2103605) Reloading an identical crash extension may cause segmentation faults When you load a copy of an already loaded crash extension file, it might trigger a segmentation fault. Currently, the crash utility detects if an original file has been loaded. Consequently, due to two identical files co-existing in the crash utility, a namespace collision occurs, which triggers the crash utility to cause a segmentation fault. You can work around the problem by loading the crash extension file only once. As a result, segmentation faults no longer occur in the described scenario. ( BZ#1906482 ) vmcore capture fails after memory hot-plug or unplug operation After performing the memory hot-plug or hot-unplug operation, the event comes after updating the device tree which contains memory layout information. Thereby the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions meet: A little-endian variant of IBM Power System runs RHEL 8. The kdump or fadump service is enabled on the system. Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation. To work around this problem, restart the kdump service after hot-plug or hot-unplug: As a result, vmcore is successfully saved in the described scenario. (BZ#1793389) Debug kernel fails to boot in crash capture environment on RHEL 8 Due to the memory-intensive nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel and a stack trace is generated instead. To work around this problem, increase the crash kernel memory as required. As a result, the debug kernel boots successfully in the crash capture environment. 
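For example, you can reserve more memory for the crash kernel with grubby. The 512M value is only illustrative; the required size depends on the system:
# Increase the crash kernel reservation for all installed kernels, then reboot.
grubby --update-kernel=ALL --args="crashkernel=512M"
reboot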
(BZ#1659609) Allocating crash kernel memory fails at boot time On some Ampere Altra systems, allocating the crash kernel memory during boot fails when the 32-bit region is disabled in BIOS settings. Consequently, the kdump service fails to start. This is caused by memory fragmentation in the region below 4 GB with no fragment being large enough to contain the crash kernel memory. To work around this problem, enable the 32-bit memory region in BIOS as follows: Open the BIOS settings on your system. Open the Chipset menu. Under Memory Configuration , enable the Slave 32-bit option. As a result, crash kernel memory allocation within the 32-bit region succeeds and the kdump service works as expected. (BZ#1940674) The QAT manager leaves no spare device for LKCF The Intel(R) QuickAssist Technology (QAT) manager ( qatmgr ) is a user space process, which by default uses all QAT devices in the system. As a consequence, there are no QAT devices left for the Linux Kernel Cryptographic Framework (LKCF). There is no need to work around this situation, as this behavior is expected and a majority of users will use acceleration from the user space. (BZ#1920086) The kernel ACPI driver reports it has no access to a PCIe ECAM memory region The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot: However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly. You can verify that PCI ECAM works correctly by accessing the PCIe configuration space over the 256 byte offset with the following output: As a result, you can ignore the warning message. For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution. (BZ#1868526) The tuned-adm profile powersave command causes the system to become unresponsive Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications. (BZ#1609288) The HP NMI watchdog does not always generate a crash dump In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. The missing NMI is initiated by one of two conditions: The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user. The hpwdt watchdog. The expiration by default sends an NMI to the server. Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and if configured, the kdump service generates a vmcore file. Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected. In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server. 
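If the iLO web interface is not reachable, the same reset can typically be issued through IPMI. The host name and credentials below are placeholders:
# Power cycle the unresponsive server through its BMC.
ipmitool -I lanplus -H myserver.example.com -U admin -P mypass chassis power cycle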
In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR). The HPE Gen9 Server line experiences this problem in single-digit percentages. The Gen10 line experiences it at an even smaller frequency. (BZ#1602962)

Using irqpoll causes vmcore generation failure
An existing problem with the nvme driver on 64-bit ARM architecture systems that run on the Amazon Web Services Graviton 1 processor causes vmcore generation to fail when you provide the irqpoll kernel command-line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory upon a kernel crash. To work around this problem:
1. Append irqpoll to the KDUMP_COMMANDLINE_REMOVE variable in the /etc/sysconfig/kdump file.
2. Remove irqpoll from the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file.
3. Restart the kdump service:
systemctl restart kdump
As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash. Note that the Amazon Web Services Graviton 2 and Amazon Web Services Graviton 3 processors do not require you to manually remove the irqpoll parameter in the /etc/sysconfig/kdump file. The kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service. For related information on this Known Issue, see The irqpoll kernel command line parameter might cause vmcore generation failure article. (BZ#1654962)

Connections fail when attaching a virtual function to a virtual machine
Pensando network cards that use the ionic device driver silently accept VLAN tag configuration requests and attempt configuring network connections while attaching network virtual functions ( VF ) to a virtual machine ( VM ). Such network connections fail as this feature is not yet supported by the card's firmware. (BZ#1930576)

The OPEN MPI library may trigger run-time failures with default PML
In the OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point communicator (PML). The later versions of the OPEN MPI 4.0.x series deprecated the openib Byte Transfer Layer (BTL). However, when OPEN MPI runs over a homogeneous cluster (same hardware and software configuration), UCX still uses the openib BTL for MPI one-sided operations. As a consequence, this may trigger execution errors. To work around this problem, run the mpirun command using the following parameters:
-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0
where:
- The -mca btl openib parameter disables the openib BTL.
- The -mca pml ucx parameter configures OPEN MPI to use the ucx PML.
- The -x UCX_NET_DEVICES= parameter restricts UCX to use the specified devices.
When OPEN MPI runs over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this may cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority by running the mpirun command using the following parameters:
-mca pml_ucx_priority 5
As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX. (BZ#1866402)

Solarflare NICs fail to create the maximum number of virtual functions (VFs)
Solarflare NICs fail to create the maximum number of VFs due to insufficient resources. You can check the maximum number of VFs that a PCIe device can create in the /sys/bus/pci/devices/PCI_ID/sriov_totalvfs file.
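For example (the PCI address is illustrative):
# Print the maximum number of VFs the adapter can expose.
cat /sys/bus/pci/devices/0000:3b:00.0/sriov_totalvfs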
To work around this problem, you can adjust either the number of VFs or the VF MSI interrupt value to a lower value, either from the Solarflare Boot Manager on startup, or by using the Solarflare sfboot utility. The default VF MSI interrupt value is 8. To adjust the VF MSI interrupt value using sfboot:
sfboot vf-msix-limit=2
Note: Adjusting the VF MSI interrupt value affects VF performance. For more information about parameters to be adjusted accordingly, see the Solarflare Server Adapter user guide . (BZ#1971506)

The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4
After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 8.7 and/or RHEL 9.1 (and later), the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards may not work and display the error message:
kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110
kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms)
kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110
An unconfirmed workaround is to power the system off and back on again. Do not reboot. (BZ#2106341)

Memory allocation for kdump fails on the 64-bit ARM architectures
On certain 64-bit ARM based systems, the firmware uses the non-contiguous memory allocation method, which reserves memory randomly at different scattered locations. Consequently, due to the unavailability of consecutive blocks of memory, the crash kernel cannot reserve memory space for the kdump mechanism. To work around this problem, install the kernel version provided by RHEL 8.8 and later. The latest version of RHEL supports the fallback dump capture mechanism that helps to find a suitable memory region in the described scenario. ( BZ#2214235 )

Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew_tick=1 boot parameter to avoid lock contentions
Large or moderate sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock , which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter. To avoid lock conflicts, enable skew_tick=1:
1. Enable the skew_tick=1 parameter with grubby:
grubby --update-kernel=ALL --args="skew_tick=1"
2. Reboot for the changes to take effect.
3. Verify the new settings by running the cat /proc/cmdline command.
Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency-sensitive real-time workloads. (BZ#2214508)

11.9. Boot loader

The behavior of grubby diverges from its documentation
When you add a new kernel using the grubby tool and do not specify any arguments, grubby passes the default arguments to the new entry. This behavior occurs even without passing the --copy-default argument. Using the --args and --copy-default options ensures those arguments are appended to the default arguments as stated in the grubby documentation. However, when you add additional arguments, such as $tuned_params , the grubby tool does not pass these arguments unless the --copy-default option is invoked. In this situation, two workarounds are available:
Either set the root= argument and leave --args empty:
grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args "root=/dev/mapper/rhel-root" --title "entry_with_root_set"
Or set the root= argument and the specified arguments, but not the default ones:
grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args "root=/dev/mapper/rhel-root some_args and_some_more" --title "entry_with_root_set_and_other_args_too"
( BZ#1900829 ) 11.10.
File systems and storage Limitations of LVM writecache The writecache LVM caching method has the following limitations, which are not present in the cache method: You cannot name a writecache logical volume when using pvmove commands. You cannot use logical volumes with writecache in combination with thin pools or VDO. The following limitation also applies to the cache method: You cannot resize a logical volume while cache or writecache is attached to it. (JIRA:RHELPLAN-27987, BZ#1798631 , BZ#1808012) XFS quota warnings are triggered too often Using the quota timer results in quota warnings triggering too often, which causes soft quotas to be enforced faster than they should. To work around this problem, do not use soft quotas, which will prevent triggering warnings. As a result, the amount of warning messages will not enforce soft quota limit anymore, respecting the configured timeout. (BZ#2059262) LVM mirror devices that store a LUKS volume sometimes become unresponsive Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations. To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage. The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 device . (BZ#1730502) The /boot file system cannot be placed on LVM You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons: On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition. RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard. The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS. Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume. (BZ#1496229) LVM no longer allows creating volume groups with mixed block sizes LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size. To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file. ( BZ#1768536 ) The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. 
To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. (BZ#2011699) VDO driver bug can cause device freezes through journal blocks While tracking a device-mapper suspend operation, a bug in the VDO driver causes the system to mark some journal blocks as waiting for metadata updates. The updates already apply since the suspend call. When the journal wraps around back to the same physical block, the block stops being available. Eventually, all writes stop until the block is available again. The growPhysical , growLogical , and setWritePolicy operations on VDO devices include a suspend/resume cycle, which can lead to the device freezing after a number of journal updates. Increasing the size of the VDO pool or the logical volume on top of it or using the pvmove and lvchange operations on LVM tools managed VDO devices can also trigger this problem. For a workaround, change the VDO device settings in any way that involves a suspend/resume cycle, shut down the VDO device completely and then start it again. This clears the incorrect in-memory state and resets the journal blocks. As a result, the device is not frozen anymore and works correctly. ( BZ#2109047 ) System hangs due to soft lockup while starting a VDO volume Due to fixing the kernel ABI breakage in the pv_mmu_ops structure, RHEL 8.7 systems with kernel version 4.18.0-425.10.1.el8_7 , that is RHEL-8.7.0.2-BaseOS, hang or encounter a kernel panic due to soft lockup while starting a Virtual Data Optimizer (VDO) volume. To work around this issue, disable any enabled VDO volumes before booting into kernel-4.18.0-425.10.1.el8_7 to prevent system hangs, or downgrade to the version of the kernel, which is 4.18.0-425.3.1.el8 , to retain VDO functionality. ( BZ#2158783 ) 11.11. Dynamic programming languages, web and database servers getpwnam() might fail when called by a 32-bit application When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command. ( BZ#1803161 ) PAM plug-in version 1.0 does not work in MariaDB MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. MariaDB 10.5 provides the plug-in versions 1.0 and 2.0, version 2.0 is the default. The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream. ( BZ#1942330 ) Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. Since the RHEL 8.3 update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. (BZ#1819607) 11.12. 
Identity Management Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. ( BZ#1729215 ) The /var/log/lastlog sparse file on IdM hosts can cause performance problems During the IdM installation, a range of 200,000 UIDs from a total of 10,000 possible ranges is randomly selected and assigned. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. However, having high UIDs can create problems with the /var/log/lastlog file. For example, if a user with the UID of 1280000008 logs in to an IdM client, the local /var/log/lastlog file size increases to almost 400 GB. Although the actual file is sparse and does not use all that space, certain applications are not designed to identify sparse files by default and may require a specific option to handle them. For example, if the setup is complex and a backup and copy application does not handle sparse files correctly, the file is copied as if its size was 400 GB. This behavior can cause performance problems. To work around this problem: In case of a standard package, refer to its documentation to identify the option that handles sparse files. In case of a custom application, ensure that it is able to manage sparse files such as /var/log/lastlog correctly. (JIRA:RHELPLAN-59111) FIPS mode does not support using a shared secret to establish a cross-forest trust Establishing a cross-forest trust using a shared secret fails in FIPS mode because NTLMSSP authentication is not FIPS-compliant. To work around this problem, authenticate with an Active Directory (AD) administrative account when establishing a trust between an IdM domain with FIPS mode enabled and an AD domain. ( BZ#1924707 ) FreeRADIUS server fails to run in FIPS mode By default, in FIPS mode, OpenSSL disables the use of the MD5 digest algorithm. As the RADIUS protocol requires MD5 to encrypt a secret between the RADIUS client and the RADIUS server, this causes the FreeRADIUS server to fail in FIPS mode. To work around this problem, follow these steps: Procedure Create the environment variable, RADIUS_MD5_FIPS_OVERRIDE for the radiusd service: To apply the change, reload the systemd configuration and start the radiusd service: To run FreeRADIUS in debug mode: Note that though FreeRADIUS can run in FIPS mode, this does not mean that it is FIPS compliant as it uses weak ciphers and functions when in FIPS mode. For more information on configuring FreeRADIUS authentication in FIPS mode, see How to configure FreeRADIUS authentication in FIPS mode . ( BZ#1958979 ) IdM to AD cross-realm TGS requests fail The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD). 
Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error: ( BZ#2125182 ) Migrated IdM users might be unable to log in due to mismatching domain SIDs If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs . (JIRA:RHELPLAN-109613) IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate. ( BZ#2120572 ) IdM Vault encryption and decryption fails in FIPS mode The OpenSSL RSA-PKCS1v15 padding encryption is blocked if FIPS mode is enabled. Consequently, Identity Management (IdM) Vaults fail to work correctly as IdM is currently using the PKCS1v15 padding for wrapping the session key with the transport certificate. ( BZ#2122919 ) Actions required when running Samba as a print server and updating from RHEL 8.4 and earlier With this update, the samba package no longer creates the /var/spool/samba/ directory. If you use Samba as a print server and use /var/spool/samba/ in the [printers] share to spool print jobs, SELinux prevents Samba users from creating files in this directory. Consequently, print jobs fail and the auditd service logs a denied message in /var/log/audit/audit.log . To avoid this problem after updating your system from 8.4 and earlier: Search the [printers] share in the /etc/samba/smb.conf file. If the share definition contains path = /var/spool/samba/ , update the setting and set the path parameter to /var/tmp/ . Restart the smbd service: If you newly installed Samba on RHEL 8.5 or later, no action is required. The default /etc/samba/smb.conf file provided by the samba-common package in this case already uses the /var/tmp/ directory to spool print jobs. (BZ#2009213) Downgrading authselect after the rebase to version 1.2.2 breaks system authentication The authselect package has been rebased to the latest upstream version 1.2.2 . Downgrading authselect is not supported and breaks system authentication for all users, including root . If you downgraded the authselect package to 1.2.1 or earlier, perform the following steps to work around this problem: At the GRUB boot screen, select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press e to edit the entry. Type single as a separate word at the end of the line that starts with linux and press Ctrl+X to start the boot process. Upon booting in single-user mode, enter the root password. Restore authselect configuration using the following command: ( BZ#1892761 ) The default keyword for enabled ciphers in the NSS does not work in conjunction with other ciphers In Directory Server you can use the default keyword to refer to the default ciphers enabled in the network security services (NSS). 
However, if you want to enable the default ciphers and additional ones using the command line or web console, Directory Server fails to resolve the default keyword. As a consequence, the server enables only the additionally specified ciphers and logs an error similar to the following: As a workaround, specify all ciphers that are enabled by default in NSS including the ones you want to additionally enable. ( BZ#1817505 ) pki-core-debuginfo update from RHEL 8.6 to RHEL 8.7 fails Updating the pki-core-debuginfo package from RHEL 8.6 to RHEL 8.7 fails. To work around this problem, run the following commands: yum remove pki-core-debuginfo yum update -y yum install pki-core-debuginfo yum install idm-pki-symkey-debuginfo idm-pki-tools-debuginfo ( BZ#2134093 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 11.13. Desktop Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. ( BZ#1668760 ) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 or later as the host. (BZ#1583445) Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release. ( BZ#1717947 ) 11.14. Graphics infrastructures radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. 
To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) Multiple HDR displays on a single MST topology may not power on On systems using NVIDIA Turing GPUs with the nouveau driver, using a DisplayPort hub (such as a laptop dock) with multiple monitors which support HDR plugged into it may result in failure to turn on. This is due to the system erroneously thinking there is not enough bandwidth on the hub to support all of the displays. (BZ#1812577) GUI in ESXi might crash due to low video memory The graphical user interface (GUI) on RHEL virtual machines (VMs) in the VMware ESXi 7.0.1 hypervisor with vCenter Server 7.0.1 requires a certain amount of video memory. If you connect multiple consoles or high-resolution monitors to the VM, the GUI requires at least 16 MB of video memory. If you start the GUI with less video memory, the GUI might terminate unexpectedly. To work around the problem, configure the hypervisor to assign at least 16 MB of video memory to the VM. As a result, the GUI on the VM no longer crashes. If you encounter this issue, Red Hat recommends that you report it to VMware. See also the following VMware article: VMs with high resolution VM console may experience a crash on ESXi 7.0.1 (83194) . (BZ#1910358) VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth. To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc server, replace the -depth 16 option with -depth 24 in the Xvnc configuration. As a result, VNC clients display the correct colors but use more network bandwidth with the server. ( BZ#1886147 ) Unable to run graphical applications using sudo command When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication. To work around this problem, use the sudo -E command to run graphical applications as a root user. ( BZ#1673073 ) Hardware acceleration is not supported on ARM Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture. To enable hardware acceleration or Vulkan on ARM, install the proprietary Nvidia driver. (JIRA:RHELPLAN-57914) Matrox G200e shows no output on a VGA display Your display might show no graphical output if you use the following system configuration: The Matrox G200e GPU A display connected over the VGA controller As a consequence, you cannot use or install RHEL on this configuration. To work around the problem, use the following procedure: Boot the system to the boot loader menu. Add the module_blacklist=mgag200 option to the kernel command line. As a result, RHEL boots and shows graphical output as expected, but the maximum resolution is limited to 1024x768 at the 16-bit color depth. (BZ#2130159) 11.15. 
The web console VNC console works incorrectly at certain resolutions When using the Virtual Network Computing (VNC) console under certain display resolutions, you might experience a mouse offset issue or you might see only a part of the interface. Consequently, using the VNC console might not be possible. To work around this issue, you can try expanding the size of the VNC console or use the Desktop Viewer in the Console tab to launch the remote viewer instead. ( BZ#2030836 ) 11.16. Red Hat Enterprise Linux system roles Unable to manage localhost by using the localhost hostname in the playbook or inventory With the inclusion of the ansible-core 2.13 package in RHEL, if you are running Ansible on the same host you manage your nodes, you cannot do it by using the localhost hostname in your playbook or inventory. This happens because ansible-core 2.13 uses the python38 module, and many of the libraries are missing, for example, blivet for the storage role, gobject for the network role. To workaround this problem, if you are already using the localhost hostname in your playbook or inventory, you can add a connection, by using ansible_connection=local , or by creating an inventory file that lists localhost with the ansible_connection=local option. With that, you are able to manage resources on localhost . For more details, see the article RHEL system roles playbooks fail when run on localhost . ( BZ#2041997 ) 11.17. Virtualization Using a large number of queues might cause Windows virtual machines to fail Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues. This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail. To work around this problem, choose one of the following two options: Keep the vTPM device enabled, but use less than 250 queues. Disable the vTPM device to use more than 250 queues. ( BZ#2020133 ) The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. (BZ#2077770) Attaching LUN devices to virtual machines using virtio-blk does not work The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller. Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun' . (BZ#1777138) Virtual machines with iommu_platform=on fail to start on IBM POWER RHEL 8 currently does not support the iommu_platform=on parameter for virtual machines (VMs) on IBM POWER system. 
As a consequence, starting a VM with this parameter on IBM POWER hardware results in the VM becoming unresponsive during the boot process. ( BZ#1910848 )

IBM POWER hosts may crash when using the ibmvfc driver
When running RHEL 8 on a PowerVM logical partition (LPAR), a variety of errors may currently occur due to problems with the ibmvfc driver. As a consequence, the host's kernel may panic under certain circumstances, such as:
- Using the Live Partition Mobility (LPM) feature
- Resetting a host adapter
- Using SCSI error handling (SCSI EH) functions
(BZ#1961722)

Using perf kvm record on IBM POWER Systems can cause the VM to crash
When using a RHEL 8 host on the little-endian variant of IBM POWER hardware, using the perf kvm record command to collect trace event samples for a KVM virtual machine (VM) in some cases results in the VM becoming unresponsive. This situation occurs when:
- The perf utility is used by an unprivileged user, and the -p option is used to identify the VM, for example perf kvm record -e trace_cycles -p 12345 .
- The VM was started using the virsh shell.
To work around this problem, use the perf kvm utility with the -i option to monitor VMs that were created using the virsh shell. For example: Note that when using the -i option, child tasks do not inherit counters, and threads will therefore not be monitored. (BZ#1924016)

Windows Server 2016 virtual machines with Hyper-V enabled fail to boot when using certain CPU models
Currently, it is not possible to boot a virtual machine (VM) that uses Windows Server 2016 as the guest operating system, has the Hyper-V role enabled, and uses one of the following CPU models:
- EPYC-IBPB
- EPYC
To work around this problem, use the EPYC-v3 CPU model, or manually enable the xsaves CPU flag for the VM. (BZ#1942888)

Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails
Currently, when migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8, the migration becomes unresponsive with a Migration status: active status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. (BZ#1741436)

Using virt-customize sometimes causes guestfs-firstboot to fail
After modifying a virtual machine (VM) disk image using the virt-customize utility, the guestfs-firstboot service in some cases fails due to incorrect SELinux permissions. This causes a variety of problems during VM startup, such as failing user creation or system registration. To avoid this problem, add --selinux-relabel to the kernel command line of the VM after modifying its disk image with virt-customize . ( BZ#1554735 )

Deleting a forward interface from a macvtap virtual network resets all connection counts of this network
Currently, deleting a forward interface from a macvtap virtual network with multiple forward interfaces also resets the connection status of the other forward interfaces of the network. As a consequence, the connection information in the live network XML is incorrect. Note, however, that this does not affect the functionality of the virtual network. To work around the issue, restart the libvirtd service on your host. ( BZ#1332758 )

Virtual machines with SLOF fail to boot in netcat interfaces
When using a netcat ( nc ) interface to access the console of a virtual machine (VM) that is currently waiting at the Slimline Open Firmware (SLOF) prompt, the user input is ignored and the VM stays unresponsive.
To work around this problem, use the nc -C option when connecting to the VM, or use a telnet interface instead. (BZ#1974622) Attaching mediated devices to virtual machines in virt-manager in some cases fails The virt-manager application is currently able to detect mediated devices, but cannot recognize whether the device is active. As a consequence, attempting to attach an inactive mediated device to a running virtual machine (VM) using virt-manager fails. Similarly, attempting to create a new VM that uses an inactive mediated device fails with a device not found error. To work around this issue, use the virsh nodedev-start or mdevctl start commands to activate the mediated device before using it in virt-manager . ( BZ#2026985 ) RHEL 9 virtual machines fail to boot in POWER8 compatibility mode Currently, booting a virtual machine (VM) that runs RHEL 9 as its guest operating system fails if the VM also uses CPU configuration similar to the following: To work around this problem, do not use POWER8 compatibility mode in RHEL 9 VMs. In addition, note that running RHEL 9 VMs is not possible on POWER8 hosts. ( BZ#2035158 ) Restarting the OVS service on a host might block network connectivity on its running VMs When the Open vSwitch (OVS) service restarts or crashes on a host, virtual machines (VMs) that are running on this host cannot recover the state of the networking device. As a consequence, VMs might be completely unable to receive packets. This problem only affects systems that use the packed virtqueue format in their virtio networking stack. To work around this problem, use the packed=off parameter in the virtio networking device definition to disable packed virtqueue. With packed virtqueue disabled, the state of the networking device can, in some situations, be recovered from RAM. ( BZ#1792683 ) Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. ( BZ#1719687 ) SUID and SGID are not cleared automatically on virtiofs When you run the virtiofsd service with the killpriv_v2 feature, your system may not automatically clear the SUID and SGID permissions after performing some file-system operations. Consequently, not clearing the permissions might cause a potential security threat. To work around this issue, disable the killpriv_v2 feature by entering the following command: (BZ#1966475) SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough. ( BZ#1740002 ) 11.18. RHEL in cloud environments Setting static IP in a RHEL virtual machine on a VMware host does not work Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. 
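One possible mitigation, shown here only as a sketch and not as the documented fix, is to stop cloud-init from rewriting the network configuration on later boots after the static IP has been set:
# Disable cloud-init network management after the initial provisioning.
cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network:
  config: disabled
EOF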
( BZ#1750862 ) kdump sometimes does not start on Azure and Hyper-V On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump kernel in some cases fails when post-exec notifiers are enabled. To work around this problem, disable crash kexec post notifiers: (BZ#1865745) The SCSI host address sometimes changes when booting a Hyper-V VM with multiple guest disks Currently, when booting a RHEL 8 virtual machine (VM) on the Hyper-V hypervisor, the host portion of the Host, Bus, Target, Lun (HBTL) SCSI address in some cases changes. As a consequence, automated tasks set up with the HBTL SCSI identification or device node in the VM do not work consistently. This occurs if the VM has more than one disk or if the disks have different sizes. To work around the problem, modify your kickstart files, using one of the following methods: Method 1: Use persistent identifiers for SCSI devices. You can use for example the following powershell script to determine the specific device identifiers: You can use this script on the hyper-v host, for example as follows: Afterwards, the disk values can be used in the kickstart file, for example as follows: As these values are specific for each virtual disk, the configuration needs to be done for each VM instance. It may, therefore, be useful to use the %include syntax to place the disk information into a separate file. Method 2: Set up device selection by size. A kickstart file that configures disk selection based on size must include lines similar to the following: (BZ#1906870) RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file. (BZ#2081114) 11.19. Supportability The getattachment command fails to download multiple attachments at once The redhat-support-tool command offers the getattachment subcommand for downloading attachments. However, getattachment is currently only able to download a single attachment and fails to download multiple attachments. As a workaround, you can download multiple attachments one by one by passing the case number and UUID for each attachment in the getattachment subcommand. ( BZ#2064575 ) redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. ( BZ#1802026 ) Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin's timeout accordingly: For one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on a specific system. 
To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: (BZ#2011413) 11.20. Containers Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: (JIRA:RHELPLAN-96940)
[ "%pre wipefs -a /dev/sda %end", "The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.", "ipmitool -I lanplus -H _myserver.example.com_ -P _mypass_ -C 3 chassis power status", "wipefs -a /dev/sda[1-9] /dev/sda", "/usr/bin/rsync -a --delete --filter '-x system.*' / 192.0.2.2::some/test/dir/ ERROR: rejecting excluded file-list name: path/to/excluded/system.mwmrc rsync error: protocol incompatibility (code 2) at flist.c(912) [receiver=3.1.3] rsync error: protocol incompatibility (code 2) at io.c(1649) [generator=3.1.3])", "yum module enable libselinux-python yum install libselinux-python", "yum module install libselinux-python:2.8/common", "update-crypto-policies --set DEFAULT:NO-CAMELLIA", "app pkcs15-init { framework pkcs15 { use_file_caching = false; } }", "package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.", "Title: Set SSH Client Alive Count Max to zero CCE Identifier: CCE-83405-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0 STIG ID: RHEL-08-010200 Title: Set SSH Idle Timeout Interval CCE Identifier: CCE-80906-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout STIG ID: RHEL-08-010201", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL", "systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer", "nmcli connection show", "nmcli connection up \"<profile_name>\"", "IPv6_rpfilter=no", "systemctl restart kdump.service", "[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]", "03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us", "KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\"", "KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory udev.children-max=2 panic=10 swiotlb=noforce novmcoredd\"", "systemctl restart kdump", "-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0", "-mca pml_ucx_priority 5", "sfboot vf-msix-limit=2", "kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110 kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms) kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110", "grubby --update-kernel=ALL --args=\"skew_tick=1\"", "grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args \"root=/dev/mapper/rhel-root\" --title \"entry_with_root_set\"", "grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args \"root=/dev/mapper/rhel-root some_args and_some_more\" --title \"entry_with_root_set_and_other_args_too\"", "systemctl enable --now blk-availability.service", "systemctl edit radiusd [Service] Environment=RADIUS_MD5_FIPS_OVERRIDE=1", "systemctl daemon-reload systemctl start radiusd", "RADIUS_MD5_FIPS_OVERRIDE=1 radiusd -X", "\"Generic error (see e-text) while getting credentials for <service principal>\"", "systemctl 
restart smbd", "authselect select sssd --force", "Security Initialization - SSL alert: Failed to set SSL cipher preference information: invalid ciphers <default,+cipher_name>: format is +cipher1,-cipher2... (Netscape Portable Runtime error 0 - no error)", "The guest operating system reported that it failed with the following error code: 0x1E", "dracut_args --omit-drivers \"radeon\" force_rebuild 1", "perf kvm record -e trace_imc/trace_cycles/ -p <guest pid> -i", "<cpu mode=\"host-model\"> <model>power8</model> </cpu>", "virtiofsd -o no_killpriv_v2", "echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers", "Output what the /dev/disk/by-id/<value> for the specified hyper-v virtual disk. Takes a single parameter which is the virtual disk file. Note: kickstart syntax works with and without the /dev/ prefix. param ( [Parameter(Mandatory=USDtrue)][string]USDvirtualdisk ) USDwhat = Get-VHD -Path USDvirtualdisk USDpart = USDwhat.DiskIdentifier.ToLower().split('-') USDp = USDpart[0] USDs0 = USDp[6] + USDp[7] + USDp[4] + USDp[5] + USDp[2] + USDp[3] + USDp[0] + USDp[1] USDp = USDpart[1] USDs1 = USDp[2] + USDp[3] + USDp[0] + USDp[1] [string]::format(\"/dev/disk/by-id/wwn-0x60022480{0}{1}{2}\", USDs0, USDs1, USDpart[4])", "PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_8.vhdx /dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4 PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_9.vhdx /dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2", "part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=/dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2 part /home --fstype=\"xfs\" --grow --ondisk=/dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4", "Disk partitioning information is supplied in a file to kick start %include /tmp/disks Partition information is created during install using the %pre section %pre --interpreter /bin/bash --log /tmp/ks_pre.log # Dump whole SCSI/IDE disks out sorted from smallest to largest ouputting # just the name disks=(`lsblk -n -o NAME -l -b -x SIZE -d -I 8,3`) || exit 1 # We are assuming we have 3 disks which will be used # and we will create some variables to represent d0=USD{disks[0]} d1=USD{disks[1]} d2=USD{disks[2]} echo \"part /home --fstype=\"xfs\" --ondisk=USDd2 --grow\" >> /tmp/disks echo \"part swap --fstype=\"swap\" --ondisk=USDd0 --size=4096\" >> /tmp/disks echo \"part / --fstype=\"xfs\" --ondisk=USDd1 --grow\" >> /tmp/disks echo \"part /boot --fstype=\"xfs\" --ondisk=USDd1 --size=1024\" >> /tmp/disks %end", "sos report -k processor.timeout=1800", "Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800", "time sos report -o processor -k processor.timeout=0 --batch --build", "podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.", "mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/known-issues
6.5. Audit Logging
6.5. Audit Logging Audit logging captures important security events, including the enforcement of permissions and authentication success/failure. See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on developing a custom logging solution if file-based (or any other built-in log4j) logging is not sufficient.
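As a rough, hedged sketch only: on the underlying JBoss EAP server, audit messages can be routed to a dedicated rotating log file through the logging subsystem with management CLI commands similar to the following. The handler name AUDIT and the logger category org.teiid.AUDIT_LOG are assumptions used for illustration; confirm the exact audit logging category for your Red Hat JBoss Data Virtualization version before applying them.
/subsystem=logging/periodic-rotating-file-handler=AUDIT:add(file={"relative-to"=>"jboss.server.log.dir","path"=>"audit.log"},suffix=".yyyy-MM-dd")
/subsystem=logging/logger=org.teiid.AUDIT_LOG:add(level=DEBUG,handlers=["AUDIT"])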
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/audit_logging
8.3. Automatically Creating Dual Entries
8.3. Automatically Creating Dual Entries Some clients and integration with Red Hat Directory Server require dual entries. For example, both Posix systems typically have a group for each user. The Directory Server's Managed Entries Plug-in creates a new managed entry, with accurate and specific values for attributes, automatically whenever an appropriate origin entry is created. 8.3.1. About Managed Entries The basic idea behind the Managed Entries Plug-in is that there are situations when Entry A is created and there should automatically be an Entry B with related attribute values. For example, when a Posix user ( posixAccount entry) is created, a corresponding group entry ( posixGroup entry) should also be created. An instance of the Managed Entries Plug-in identifies what entry (the origin entry ) triggers the plug-in to automatically generate a new entry (the managed entry ). The plug-in works within a defined scope of the directory tree, so only entries within that subtree and that match the given search filter trigger a Managed Entries operation. Much like configuring a class of service, a managed entry is configured through two entries: A definition entry, that identifies the scope of the plug-in instance and the template to use A template entry, that models what the final managed entry will look like 8.3.1.1. About the Instance Definition Entry As with the Linked Attributes and DNA Plug-ins, the Managed Entries Plug-in has a container entry in cn=plugins,cn=config , and each unique configuration instance of the plug-in has a definition entry beneath that container. An instance of the Managed Entries Plug-in defines three things: The search criteria to identify the origin entries (using a search scope and a search filter) The subtree under which to create the managed entries (the new entry location) The template entry to use for the managed entries Figure 8.2. Defining Managed Entries For example: The origin entry does not have to have any special configuration or settings to create a managed entry; it simply has to be created within the scope of the plug-in and match the given search filter. 8.3.1.2. About the Template Entry Each instance of the plug-in uses a template entry which defines the managed entry configuration. The template effectively lays out the entry, from the object classes to the entry values. Note Since the template is referenced in the definition entry, it can be located anywhere in the directory. However, it is recommended that the template entry be under the replicated suffix so that any other suppliers in multi-supplier replication all use the same template for their local instances of the Managed Entries Plug-in. The concept of a template entry is similar to the templates used in CoS, but there are some important differences. The managed entry template is slightly different than the type of template used for a class of service. For a class of service, the template contains a single attribute with a specific value that is fed into all of the entries which belong to that CoS. Any changes to the class of service are immediately reflected in the associated entries, because the CoS attributes in those entries are virtual attributes, not truly attributes set on the entry. The template entry for the Managed Entries Plug-in, on the other hand, is not a central entry that supplies values to associated entries. It is a true template - it lays out what is in the entry. 
The template entry can contain both static attributes (ones with pre-defined values, similar to a CoS) and mapped attributes (attributes that pull their values or parts of values from the origin entry). The template is referenced when the managed entry is created and then any changes are applied to the managed entry only when the origin entry is changed and the template is evaluated again by the plug-in to apply those updates. Figure 8.3. Templates, Managed Entries, and Origin Entries The template can provide a specific value for an attribute in the managed entry by using a static attribute in the template. The template can also use a value that is derived from some attribute in the origin entry, so the value may be different from entry to entry; that is a mapped attribute, because it references the attribute type in the origin entry, not a value. A mapped value use a combination of token (dynamic values) and static values, but it can only use one token in a mapped attribute . The mapped attributes in the template use tokens, prepended by a dollar sign (USD), to pull in values from the origin entry and use it in the managed entry. (If a dollar sign is actually in the managed attribute value, then the dollar sign can be escaped by using two dollar signs in a row.) A mapped attribute definition can be quoted with curly braces, such as Attr: USD{cn}test . Quoting a token value is not required if the token name is not immediately followed by a character that is valid in an attribute name, such as a space or comma. For example, USDcn test is acceptable in an attribute definition because a space character immediately follow the attribute name, but USDcntest is not valid because the Managed Entries Plug-in attempts to look for an attribute named cntest in the origin entry. Using curly braces identifies the attribute token name. Note Make sure that the values given for static and mapped attributes comply with the required attribute syntax. 8.3.1.3. Entry Attributes Written by the Managed Entries Plug-in Both the origin entry and the managed entry have special managed entries attributes which indicate that they are being managed by an instance of the Managed Entries Plug-in. For the origin entry, the plug-in adds links to associated managed entries. On the managed entry, the plug-in adds attributes that point back to the origin entry, in addition to the attributes defined in the template. Using special attributes to indicate managed and origin entries makes it easy to identify the related entries and to assess changes made by the Managed Entries Plug-in. 8.3.1.4. Managed Entries Plug-in and Directory Server Operations The Managed Entries Plug-in has some impact on how the Directory Server carries out common operations, like add and delete operations. Table 8.3. Managed Entries Plug-in and Directory Server Operations Operation Effect by the Managed Entries Plug-in Add With every add operation, the server checks to see if the new entry is within the scope of any Managed Entries Plug-in instance. If it meets the criteria for an origin entry, then a managed entry is created and managed entry-related attributes are added to both the origin and managed entry. Modify If an origin entry is modified, it triggers the plug-in to update the managed entry. Changing a template entry, however, does not update the managed entry automatically. Any changes to the template entry are not reflected in the managed entry until after the time the origin entry is modified. 
The mapped managed attributes within a managed entry cannot be modified manually, only by the Managed Entry Plug-in. Other attributes in the managed entry (including static attributes added by the Managed Entry Plug-in) can be modified manually. Delete If an origin entry is deleted, then the Managed Entries Plug-in will also delete any managed entry associated with that entry. There are some limits on what entries can be deleted. A template entry cannot be deleted if it is currently referenced by a plug-in instance definition. A managed entry cannot be deleted except by the Managed Entries Plug-in. Rename If an origin entry is renamed, then plug-in updates the corresponding managed entry. If the entry is moved out of the plug-in scope, then the managed entry is deleted, while if an entry is moved into the plug-in scope, it is treated like an add operation and a new managed entry is created. As with delete operations, there are limits on what entries can be renamed or moved. A configuration definition entry cannot be moved out of the Managed Entries Plug-in container entry. If the entry is removed, that plug-in instance is inactivated. If an entry is moved into the Managed Entries Plug-in container entry, then it is validated and treated as an active configuration definition. A template entry cannot be renamed or moved if it is currently referenced by a plug-in instance definition. A managed entry cannot be renamed or moved except by the Managed Entries Plug-in. Replication The Managed Entries Plug-in operations are not initiated by replication updates . If an add or modify operation for an entry in the plug-in scope is replicated to another replica, that operation does not trigger the Managed Entries Plug-in instance on the replica to create or update an entry. The only way for updates for managed entries to be replicated is to replicate the final managed entry over to the replica. 8.3.2. Creating the Managed Entries Template Entry The first entry to create is the template entry. The template entry must contain all of the configuration required for the generated, managed entry. This is done by setting the attribute-value assertions in static and mapped attributes in the template: The static attributes set an explicit value; mapped attributes pull some value from the originating entry is used to supply the given attribute. The values of these attributes will be tokens in the form attribute: USDattr . As long as the syntax of the expanded token of the attribute does not violate the required attribute syntax, then other terms and strings can be used in the attribute. For example: There are some syntax rules that must be followed for the managed entries: A mapped value use a combination of token (dynamic values) and static values, but it can only use one token per mapped attribute . The mapped attributes in the template use tokens, prepended by a dollar sign (USD), to pull in values from the origin entry and use it in the managed entry. (If a dollar sign is actually in the managed attribute value, then the dollar sign can be escaped by using two dollar signs in a row.) A mapped attribute definition can be quoted with curly braces, such as Attr: USD{cn}test . Quoting a token value is not required if the token name is not immediately followed by a character that is valid in an attribute name, such as a space or comma. 
For example, USDcn test is acceptable in an attribute definition because a space character immediately follow the attribute name, but USDcntest is not valid because the Managed Entries Plug-in attempts to look for an attribute named cntest in the origin entry. Using curly braces identifies the attribute token name. Make sure that the values given for static and mapped attributes comply with the required attribute syntax. Note Make sure that the values given for static and mapped attributes comply with the required attribute syntax. For example, if one of the mapped attributes is gidNumber , then the mapped value should be an integer. Table 8.4. Attributes for the Managed Entry Template Attribute Description mepTemplateEntry (object class) Identifies the entry as a template. cn Gives the common name of the entry. mepMappedAttr Contains an attribute-token pair that the plug-in uses to create an attribute in the managed entry with a value taken from the originating entry. mepRDNAttr Specifies which attribute to use as the naming attribute in the managed entry. The attribute used as the RDN must be a mapped attribute for the configuration to be valid. mepStaticAttr Contains an attribute-value pair that will be used, with that specified value, in the managed entry. To create a template entry: Use the dsconf plugin managed-entries template add command to add the template entry. For example: 8.3.3. Creating the Managed Entries Instance Definition Once the template entry is created, then it is possible to create a definition entry that points to that template. The definition entry is an instance of the Managed Entries Plug-in. Note When the definition is created, the server checks to see if the specified template entry exists. If the template does not exist, then the server returns a warning that the definition configuration is invalid. The definition entry must define the parameters to recognize the potential origin entry and the information to create the managed entry. The attributes available for the plug-in instance are listed in Table 8.5, "Attributes for the Managed Entries Definition Entry" . Table 8.5. Attributes for the Managed Entries Definition Entry Attribute Name Description originFilter The search filter to use to search for and identify the entries within the subtree which require a managed entry. The syntax is the same as a regular search filter. originScope The base subtree which contains the potential origin entries for the plug-in to monitor. managedTemplate Identifies the template entry to use to create the managed entry. This entry can be located anywhere in the directory tree. managedBase The subtree under which to create the managed entries. Note The Managed Entries Plug-in is enabled by default. If this plug-in is disabled, then re-enable it as described in Section 1.10.2, "Enabling and Disabling Plug-ins" . To create an instance: Create the new plug-in instance below the cn=Managed Entries,cn=plugins,cn=config container entry. For example: This command sets the scope and filter for the origin entry search, the location of the new managed entries, and the template entry to use. If the Directory Server is not configured to enable dynamic plug-ins, restart the server to load the modified new plug-in instance: 8.3.4. Putting Managed Entries Plug-in Configuration in a Replicated Database As Section 8.3.1, "About Managed Entries" highlights, different instances of the Managed Entries Plug-in are created as children beneath the container plug-in entry in cn=plugins,cn=com . 
(This is common for plug-ins which allow multiple instances.) The drawback to this is that the configuration entries in cn=plugins,cn=config are not replicated, so the configuration has to be re-created on each Directory Server instance. The Managed Entries Plug-in entry allows the nsslapd-pluginConfigArea attribute. This attribute points to another container entry, in the main database area, which contains the plug-in instance entries. This container entry can be in a replicated database, which allows the plug-in configuration to be replicated. Create a container entry. Then, to point the plug-in configuration at the new container entry, enter: Move or create the definition ( Section 8.3.3, "Creating the Managed Entries Instance Definition" ) and template ( Section 8.3.2, "Creating the Managed Entries Template Entry" ) entries under the new container entry.
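As a minimal sketch, assuming the placeholder DN used elsewhere in this section, the container entry itself is an ordinary directory entry and can be added with an LDIF such as the following before pointing nsslapd-pluginConfigArea at it:
# placeholder DN; place the container under your replicated suffix
dn: cn=managed entries container,ou=containers,dc=example,dc=com
objectClass: top
objectClass: nsContainer
cn: managed entries container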
[ "dn: cn=Posix User-Group,cn=Managed Entries,cn=plugins,cn=config objectclass: extensibleObject cn: Posix User-Group originScope: ou=people,dc=example,dc=com originFilter: objectclass=posixAccount managedBase: ou=groups,dc=example,dc=com managedTemplate: cn=Posix User-Group Template,ou=Templates,dc=example,dc=com", "dn: cn=Posix User-Group Template,ou=Templates,dc=example,dc=com objectclass: mepTemplateEntry cn: Posix User-Group Template mepRDNAttr: cn mepStaticAttr: objectclass: posixGroup mepMappedAttr: cn: USDcn Group Entry mepMappedAttr: gidNumber: USDgidNumber mepMappedAttr: memberUid: USDuid", "dn: uid=jsmith,ou=people,dc=example,dc=com objectclass: mepOriginEntry objectclass: posixAccount sn: Smith mail: [email protected] mepManagedEntry: cn=jsmith Posix Group,ou=groups,dc=example,dc=com", "dn: cn=jsmith Posix Group,ou=groups,dc=example,dc=com objectclass: mepManagedEntry objectclass: posixGroup mepManagedBy: uid=jsmith,ou=people,dc=example,dc=com", "mepStaticAttr: attribute: specific_value mepMappedAttr: attribute: USDtoken_value", "mepMappedAttr: cn: Managed Group for USDcn", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin managed-entries template \" cn=Posix User Template,ou=templates,dc=example,dc=com \" add --rdn-attr \" cn \" --static-attr \" objectclass: posixGroup \" --mapped-attr \" cn: USDcn Group Entry\" \"gidNumber: USDgidNumber\" \"memberUid: USDuid \"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin managed-entries config \"cn=instance,cn=Managed Entries,cn=plugins,cn=config\" add --scope=\"ou=people,dc=example,dc=com\" --filter=\"objectclass=posixAccount\" --managed-base=\"ou=groups,dc=example,dc=com\" --managed-template=\"cn=Posix User-Group Template,ou=Templates,dc=example,dc=com\"", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin managed-entries set --config-area=\" cn=managed entries container,ou=containers,dc=example,dc=com \"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using-managed-entries
Chapter 4. Configuration information for Red Hat Quay
Chapter 4. Configuration information for Red Hat Quay Checking a configuration YAML can help identify and resolve various issues related to the configuration of Red Hat Quay. Checking the configuration YAML can help you address the following issues: Incorrect Configuration Parameters : If the database is not functioning as expected or is experiencing performance issues, your configuration parameters could be at fault. By checking the configuration YAML, administrators can ensure that all the required parameters are set correctly and match the intended settings for the database. Resource Limitations : The configuration YAML might specify resource limits for the database, such as memory and CPU limits. If the database is running into resource constraints or experiencing contention with other services, adjusting these limits can help optimize resource allocation and improve overall performance. Connectivity Issues : Incorrect network settings in the configuration YAML can lead to connectivity problems between the application and the database. Ensuring that the correct network configurations are in place can resolve issues related to connectivity and communication. Data Storage and Paths : The configuration YAML may include paths for storing data and logs. If the paths are misconfigured or inaccessible, the database may encounter errors while reading or writing data, leading to operational issues. Authentication and Security : The configuration YAML may contain authentication settings, including usernames, passwords, and access controls. Verifying these settings is crucial for maintaining the security of the database and ensuring only authorized users have access. Plugin and Extension Settings : Some databases support extensions or plugins that enhance functionality. Issues may arise if these plugins are misconfigured or not loaded correctly. Checking the configuration YAML can help identify any problems with plugin settings. Replication and High Availability Settings : In clustered or replicated database setups, the configuration YAML may define replication settings and high availability configurations. Incorrect settings can lead to data inconsistency and system instability. Backup and Recovery Options : The configuration YAML might include backup and recovery options, specifying how data backups are performed and how data can be recovered in case of failures. Validating these settings can ensure data safety and successful recovery processes. By checking your configuration YAML, Red Hat Quay administrators can detect and resolve these issues before they cause significant disruptions to the application or service relying on the database. 4.1. Obtaining configuration information for Red Hat Quay Configuration information can be obtained for all types of Red Hat Quay deployments, include standalone, Operator, and geo-replication deployments. Obtaining configuration information can help you resolve issues with authentication and authorization, your database, object storage, and repository mirroring. After you have obtained the necessary configuration information, you can update your config.yaml file, search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Procedure To obtain configuration information on Red Hat Quay Operator deployments, you can use oc exec , oc cp , or oc rsync . 
To use the oc exec command, enter the following command: USD oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml This command returns your config.yaml file directly to your terminal. To use the oc copy command, enter the following commands: USD oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml To display this information in your terminal, enter the following command: USD cat /tmp/config.yaml To use the oc rsync command, enter the following commands: oc rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml Example output DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us To obtain configuration information on standalone Red Hat Quay deployments, you can use podman cp or podman exec . To use the podman copy command, enter the following commands: USD podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml To use podman exec , enter the following commands: USD podman exec -it <quay_container_id> cat /conf/stack/config.yaml Example output BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} --- 4.2. Obtaining database configuration information You can obtain configuration information about your database by using the following procedure. Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command: USD oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf If you are using a standalone deployment of Red Hat Quay, enter the following command: USD podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf
[ "oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml", "oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml", "cat /tmp/config.yaml", "rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/", "cat /tmp/local_directory/config.yaml", "DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us", "podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/", "cat /tmp/local_directory/config.yaml", "podman exec -it <quay_container_id> cat /conf/stack/config.yaml", "BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} ---", "oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf", "podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/troubleshooting_red_hat_quay/obtaining-quay-config-information
21.3. Booleans
21.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: selinuxuser_postgresql_connect_enabled Having this Boolean enabled allows any user domain (as defined by PostgreSQL) to make connections to the database server. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view the description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
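For example, to allow user domains to connect to the database server, enable the Boolean described above. The -P option makes the change persistent across reboots, and getsebool confirms the new state:
~]# setsebool -P selinuxuser_postgresql_connect_enabled on
~]# getsebool selinuxuser_postgresql_connect_enabled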
[ "~]USD getsebool -a | grep service_name", "~]USD sepolicy booleans -b boolean_name" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-postgresql-booleans
20.41. Configuring the Guest Virtual Machine CPU Model
20.41. Configuring the Guest Virtual Machine CPU Model For simple defaults, the guest virtual machine CPU configuration accepts the same basic XML representation as the host physical machine capabilities XML exposes. In other words, the XML from the virsh cpu-baseline command can now be copied directly into the guest virtual machine XML at the top level under the domain element. In the XML snippet, there are a few extra attributes available when describing a CPU in the guest virtual machine XML. These can mostly be ignored, but for the curious here is a quick description of what they do. The top level <cpu> element has an attribute called match with possible values of: match='minimum' - the host physical machine CPU must have at least the CPU features described in the guest virtual machine XML. If the host physical machine has additional features beyond the guest virtual machine configuration, these will also be exposed to the guest virtual machine. match='exact' - the host physical machine CPU must have at least the CPU features described in the guest virtual machine XML. If the host physical machine has additional features beyond the guest virtual machine configuration, these will be masked out from the guest virtual machine. match='strict' - the host physical machine CPU must have exactly the same CPU features described in the guest virtual machine XML. The enhancement is that the <feature> elements can each have an extra 'policy' attribute with possible values of: policy='force' - expose the feature to the guest virtual machine even if the host physical machine does not have it. This is usually only useful in the case of software emulation. Note It is possible that even using the force policy, the hypervisor may not be able to emulate the particular feature. policy='require' - expose the feature to the guest virtual machine and fail if the host physical machine does not have it. This is the sensible default. policy='optional' - expose the feature to the guest virtual machine if it happens to support it. policy='disable' - if the host physical machine has this feature, then hide it from the guest virtual machine. policy='forbid' - if the host physical machine has this feature, then fail and refuse to start the guest virtual machine. The 'forbid' policy is for a niche scenario where an incorrectly functioning application will try to use a feature even if it is not in the CPUID mask, and you wish to prevent accidentally running the guest virtual machine on a host physical machine with that feature. The 'optional' policy has special behavior with respect to migration. When the guest virtual machine is initially started the flag is optional, but when the guest virtual machine is live migrated, this policy turns into 'require', since you cannot have features disappearing across migration.
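As an illustrative sketch only (the model and feature names are placeholders; use the values reported by virsh cpu-baseline or virsh capabilities on your host physical machine), a guest virtual machine CPU definition combining the match attribute and per-feature policies might look like the following:
<!-- placeholder model and feature names; replace with values from virsh cpu-baseline -->
<cpu match='exact'>
  <model>SandyBridge</model>
  <feature policy='require' name='vmx'/>
  <feature policy='disable' name='lahf_lm'/>
</cpu>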
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-configuring_the_guest_virtual_machine_cpu_model
Chapter 2. Before You Begin
Chapter 2. Before You Begin 2.1. Update the target server The target server contains the JBoss EAP Migration Tool and includes the latest bug fixes for the tool. You can use the JBoss EAP Migration Tool to migrate from one minor release of JBoss EAP to another minor release. Before the migration process, you must ensure the JBoss EAP Migration Tool receives the latest JBoss EAP updates to prevent re-introducing bugs that might already be fixed for the tool. You can update the tool by applying the latest JBoss EAP updates to the target server. For example, if you want to migrate your existing source server configuration from JBoss EAP 6.4 to JBoss EAP 7.4 then you must apply the latest JBoss EAP Migration Tool updates to JBoss EAP 7.4 before you can run the tool. Otherwise, after you migrate from JBoss EAP 6.4 to JBoss EAP 7.4, you might introduce issues to the new source server configuration. Note Releases before JBoss EAP 6.4 do not support the JBoss EAP Migration Tool. If you want to use the tool with JBoss EAP 6.4 then you must upgrade to JBoss EAP 6.4. Further, you must copy the source configuration files from JBoss EAP 6.0 to JBoss EAP 6.4. Additional resources For information about how to upgrade your server configuration, see the Patching and Upgrading Guide for JBoss EAP. 2.2. Run With a Clean Target Server Installation Because the JBoss Server Migration Tool creates the configuration files based on the configuration of a release, it is intended to be run on a clean and unconfigured target server installation. The JBoss Server Migration Tool creates a backup of the target server's configuration files by appending .beforeMigration to the file names. It then creates totally new configuration files for the target server using the source server's configuration files, and migrates the configuration to run in target server configuration. Warning When you run the JBoss Server Migration Tool, all changes on the target server made between installation and running the migrate tool are lost. Also, be aware that if you run the tool against the target server directory more than once, the subsequent runs will overwrite the original target configuration files that were backed up on the first run of the tool. This is because each run of the tool backs up the configuration files by appending .beforeMigration , resulting in the loss of any existing backed up configuration files. 2.3. Customize the Migration The JBoss Server Migration Tool provides the ability to configure logging, reporting, and the execution of migration tasks. By default, when you run the JBoss Server Migration Tool in non-interactive mode, it migrates the entire server configuration. You can configure the JBoss Server Migration Tool to customize logging and reporting output. You can also configure it to skip any part of the configuration that you do not want to migrate. For instructions on how to configure properties to control the migration process, see Configuring the JBoss Server Migration Tool .
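As a hedged example of what a migration run can look like with the standalone JBoss Server Migration Tool distribution (the script name, option names, and installation paths are assumptions; verify them against the help output of the tool shipped with your target server):
# assumed script name, options, and paths; check the tool's help output first
./jboss-server-migration.sh --source /opt/jboss-eap-6.4 --target /opt/jboss-eap-7.4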
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_the_jboss_server_migration_tool/server_migration_tool_server_prerequisites
Chapter 65. ListenerStatus schema reference
Chapter 65. ListenerStatus schema reference Used in: KafkaStatus Property Description type The type property has been deprecated, and should now be configured using name . The name of the listener. string name The name of the listener. string addresses A list of the addresses for this listener. ListenerAddress array bootstrapServers A comma-separated list of host:port pairs for connecting to the Kafka cluster using this listener. string certificates A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for tls and external listeners. string array
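An illustrative, hedged example of how these fields typically appear under the status of a Kafka resource follows; the host name, port, and certificate are placeholders:
status:
  listeners:
    - name: tls
      addresses:
        - host: my-cluster-kafka-bootstrap.myproject.svc   # placeholder address
          port: 9093
      bootstrapServers: my-cluster-kafka-bootstrap.myproject.svc:9093
      certificates:
        - |
          -----BEGIN CERTIFICATE-----
          ...
          -----END CERTIFICATE-----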
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ListenerStatus-reference
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Red Hat OpenShift Data Foundation 4.9 supports deployment of Red Hat OpenShift on IBM Cloud clusters in connected environments.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_cloud/providing-feedback-on-red-hat-documentation_rhodf
26.3. Installation Network Parameters
26.3. Installation Network Parameters The following parameters can be used to set up the preliminary network automatically and can be defined in either the parameter file or the CMS configuration file. The parameters in this section are the only parameters that can also be used in a CMS configuration file. All other parameters in other sections must be specified in the parameter file. NETTYPE=" type " Where type must be one of the following: qeth , lcs , or ctc . The default is qeth . Choose lcs for: OSA-2 Ethernet/Token Ring OSA-Express Fast Ethernet in non-QDIO mode OSA-Express High Speed Token Ring in non-QDIO mode Gigabit Ethernet in non-QDIO mode Choose qeth for: OSA-Express Fast Ethernet Gigabit Ethernet (including 1000Base-T) High Speed Token Ring HiperSockets ATM (running Ethernet LAN emulation) SUBCHANNELS=" device_bus_IDs " Where bus_IDs is a comma-separated list of two or three device bus IDs. Provides required device bus IDs for the various network interfaces: For example (a sample qeth SUBCHANNEL statement): PORTNAME=" osa_portname " , PORTNAME=" lcs_portnumber " This variable supports OSA devices operating in qdio mode or in non-qdio mode. When using qdio mode ( NETTYPE="qeth" ), osa_portname is the portname specified on the OSA device when operating in qeth mode. When using non-qdio mode ( NETTYPE="lcs" ), lcs_portnumber is used to pass the relative port number as a decimal integer in the range of 0 through 15. PORTNO=" portnumber " You can add either PORTNO="0" (to use port 0) or PORTNO="1" (to use port 1 of OSA features with two ports per CHPID) to the CMS configuration file to avoid being prompted for the mode. LAYER2=" value " Where value can be 0 or 1 . Use LAYER2="0" to operate an OSA or HiperSockets device in layer 3 mode ( NETTYPE="qeth" ). Use LAYER2="1" for layer 2 mode. For virtual network devices under z/VM this setting must match the definition of the GuestLAN or VSWITCH to which the device is coupled. To use network services that operate on layer 2 (the Data Link Layer or its MAC sublayer) such as DHCP, layer 2 mode is a good choice. The qeth device driver default for OSA devices is now layer 2 mode. To continue using the default of layer 3 mode, set LAYER2="0" explicitly. VSWITCH=" value " Where value can be 0 or 1 . Specify VSWITCH="1" when connecting to a z/VM VSWITCH or GuestLAN, or VSWITCH="0" (or nothing at all) when using directly attached real OSA or directly attached real HiperSockets. MACADDR=" MAC_address " If you specify LAYER2="1" and VSWITCH="0" , you can optionally use this parameter to specify a MAC address. Linux requires six colon-separated octets as pairs lower case hex digits - for example, MACADDR=62:a3:18:e7:bc:5f . Note that this is different from the notation used by z/VM. If you specify LAYER2="1" and VSWITCH="1" , you must not specify the MACADDR , because z/VM assigns a unique MAC address to virtual network devices in layer 2 mode. CTCPROT=" value " Where value can be 0 , 1 , or 3 . Specifies the CTC protocol for NETTYPE="ctc" . The default is 0 . HOSTNAME=" string " Where string is the hostname of the newly-installed Linux instance. IPADDR=" IP " Where IP is the IP address of the new Linux instance. NETMASK=" netmask " Where netmask is the netmask. The netmask supports the syntax of a prefix integer (from 1 to 32) as specified in IPv4 classless interdomain routing (CIDR). For example, you can specify 24 instead of 255.255.255.0 , or 20 instead of 255.255.240.0 . 
GATEWAY=" gw " Where gw is the gateway IP address for this network device. MTU=" mtu " Where mtu is the Maximum Transmission Unit (MTU) for this network device. DNS=" server1 : server2 : additional_server_terms : serverN " Where " server1 : server2 : additional_server_terms : serverN " is a list of DNS servers, separated by colons. For example: SEARCHDNS=" domain1 : domain2 : additional_dns_terms : domainN " Where " domain1 : domain2 : additional_dns_terms : domainN " is a list of the search domains, separated by colons. For example: You only need to specify SEARCHDNS= if you specify the DNS= parameter. DASD= Defines the DASD or range of DASDs to configure for the installation. For a detailed description of the syntax, refer to the dasd_mod device driver module option described in the chapter on the DASD device driver in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 . Linuxrc supports a comma-separated list of device bus IDs or of ranges of device bus IDs with the optional attributes ro , diag , erplog , and failfast . Optionally, you can abbreviate device bus IDs to device numbers with leading zeros stripped. Any optional attributes should be separated by colons and enclosed in parentheses. Optional attributes follow a device bus ID or a range of device bus IDs. The only supported global option is autodetect . This does not support the specification of non-existent DASDs to reserve kernel device names for later addition of DASDs. Use persistent DASD device names (for example /dev/disk/by-path/... ) to enable transparent addition of disks later. Other global options such as probeonly , nopav , or nofcx are not supported by linuxrc. Only specify those DASDs that you really need to install your system. All unformatted DASDs specified here must be formatted after a confirmation later on in the installer (refer to Section 23.6.1.1, "DASD low-level formatting" ). Add any data DASDs that are not needed for the root file system or the /boot partition after installation as described in Section 25.1.3, "DASDs Which Are Not Part of the Root File System" . For FCP-only environments, specify DASD="none" . For example: FCP_ n =" device_bus_ID WWPN FCP_LUN " Where: n is typically an integer value (for example FCP_1 or FCP_2 ) but could be any string with alphabetic or numeric characters or underscores. device_bus_ID specifies the device bus ID of the FCP device representing the host bus adapter (HBA) (for example 0.0.fc00 for device fc00). WWPN is the world wide port name used for routing (often in conjunction with multipathing) and is as a 16-digit hex value (for example 0x50050763050b073d ). FCP_LUN refers to the storage logical unit identifier and is specified as a 16-digit hexadecimal value padded with zeroes to the right (for example 0x4020400100000000 ). These variables can be used on systems with FCP devices to activate FCP LUNs such as SCSI disks. Additional FCP LUNs can be activated during the installation interactively or by means of a kickstart file. There is no interactive question for FCP in linuxrc. An example value may look similar to the following: Important Each of the values used in the FCP parameters (for example FCP_1 or FCP_2 ) are site-specific and are normally supplied by the FCP storage administrator. The installation program prompts you for any required parameters not specified in the parameter or configuration file except for FCP_n.
[ "qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"", "SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"", "DNS=\"10.1.2.3:10.3.2.1\"", "SEARCHDNS=\"subdomain.domain:domain\"", "DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"", "FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-Installation_network_parameters
7.71. gnome-settings-daemon
7.71. gnome-settings-daemon 7.71.1. RHBA-2013:0312 - gnome-settings-daemon bug fix and enhancement update Updated gnome-settings-daemon packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The gnome-settings-daemon packages contain a daemon to share settings from GNOME with other applications. It also handles global key bindings, as well as a number of desktop-wide settings. Bug Fixes BZ#805064 Previously, the LED indicators of some Wacom graphics tablets were not supported in the gnome-settings-daemon package. Consequently, the status LEDs on Wacom tablets would not accurately indicate the current control mode. With this update, LED support has been added to gnome-settings-daemon. As a result, the tablet LEDs now work as expected. BZ#812363 Previously, using function keys without modifiers (F1, F2, and so on) as keyboard shortcuts for custom actions did not work. With this update, a patch has been added to fix this bug. As a result, gnome-settings-daemon now allows unmodified function keys to be used as keyboard shortcuts for custom actions. BZ# 824757 In certain cases, the gnome-settings-daemon did not properly handle the display configuration settings. Consequently, using the system's hot-key to change the display configuration either did not select a valid XRandR configuration or kept monitors in clone mode. This bug has been fixed and gnome-settings-daemon now selects valid XRandR modes and handles the clone mode as expected. BZ# 826128 Previously, connecting a screen tablet to a computer before activation of the tablet screen caused the input device to be matched with the only available monitor - the computer screen. Consequently, the stylus motions were incorrectly mapped to the computer screen instead of the tablet itself. With this update, a patch has been introduced to detect the tablet screen as soon as it becomes available. As a result, the device is correctly re-matched when the tablet screen is detected. BZ#839328 Previously, using the shift key within a predefined keyboard shortcut mapped to the tablet's ExpressKey button caused gnome-settings-daemon to crash after pressing ExpressKey. This bug has been fixed, and the shortcuts which use the shift key can now be mapped to ExpressKey without complications. BZ#853181 Prior to this update, the mouse plug-in in the gnome-settings-daemon package interfered with Wacom devices. Consequently, using ExpressKey on a tablet after hot-plugging generated mouse click events. With this update, the mouse plug-in has been fixed to ignore tablet devices and the interference no longer occurs. BZ# 886922 Previously, on tablets with multiple mode-switch buttons such as the Wacom Cintiq 24HD, all mode-switch buttons would cycle through the different modes. With this update, each different mode-switch button will select the right mode for the given button. BZ#861890 Due to a bug in the gnome settings daemon, changing the monitor layout led to incorrect tablet mapping. With this update, the graphics tablet mapping is automatically updated when the monitor layout is changed. As a result, the stylus movements are correctly mapped after the layout change and no manual update is needed.
Enhancements BZ# 772728 With this update, several integration improvements for Wacom graphics tablets have been backported from upstream: - touchscreen devices are now automatically set in absolute mode instead of relative - memory leaks on tablet hot plug have been fixed - ExpressKeys no longer fail after the layout rotation - test applications are now included in the package to help with debugging issues. BZ#858255 With this update, the touch feature of input devices has been enabled in the default settings of gnome-settings-daemon. All users of gnome-settings-daemon are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gnome-settings-daemon
7.10. RHEA-2014:1519 - new package: ksc
7.10. RHEA-2014:1519 - new package: ksc A new ksc package is now available for Red Hat Enterprise Linux 6. The ksc package contains KSC, a kernel module source code checker to find usage of non-whitelist symbols. This enhancement update adds the ksc package to Red Hat Enterprise Linux 6. (BZ# 1085004 ) All users who require ksc are advised to install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1519
Chapter 155. KafkaRebalance schema reference
Chapter 155. KafkaRebalance schema reference Property Property type Description spec KafkaRebalanceSpec The specification of the Kafka rebalance. status KafkaRebalanceStatus The status of the Kafka rebalance.
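A minimal, hedged example of a KafkaRebalance resource follows; the resource name, cluster label, and goals are placeholders, and Cruise Control must be deployed for the referenced Kafka cluster:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance            # placeholder name
  labels:
    strimzi.io/cluster: my-cluster   # placeholder cluster name
spec:
  goals:
    - NetworkInboundCapacityGoal
    - DiskCapacityGoal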
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkarebalance-reference
20.31. Deleting Storage Volumes
20.31. Deleting Storage Volumes The virsh vol-delete vol pool command deletes a given volume. The command requires the name or UUID of the storage pool the volume is in as well as the name of the storage volume. In lieu of the volume name, the key or path of the volume to delete may also be used. Example 20.91. How to delete a storage volume The following example deletes a storage volume named new-vol , which is contained in the storage pool vdisk :
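As a hedged alternative form (the path shown is a placeholder), the same volume could instead be identified by its key or path, in which case naming the storage pool is not required:
# placeholder volume path
virsh vol-delete /var/lib/libvirt/images/new-vol.img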
[ "virsh vol-delete new-vol vdisk vol new-vol deleted" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-storage_volume_commands-deleting_storage_volumes
Chapter 3. Updating Red Hat build of OpenJDK 21 for Microsoft Windows using the archive
Chapter 3. Updating Red Hat build of OpenJDK 21 for Microsoft Windows using the archive Red Hat build of OpenJDK 21 for Microsoft Windows can be manually updated using the archive. Procedure Download the archive of Red Hat build of OpenJDK 21. Extract the contents of the archive to a directory of your choice. Note Extracting the contents of the archive to a directory path that does not contain spaces is recommended. In Command Prompt, update the JAVA_HOME environment variable as follows: Open Command Prompt as an administrator. Set the value of the environment variable to your Red Hat build of OpenJDK 21 for Microsoft Windows installation path: If the path contains spaces, use the shortened path name. Restart Command Prompt to reload the environment variables. Set the value of the PATH variable if it is not set already: Restart Command Prompt to reload the environment variables. Verify that java -version works without supplying the full path.
[ "C:\\> setx /m JAVA_HOME \"C:\\Progra~1\\RedHat\\java-21-openjdk-<version>\"", "C:\\> setx -m PATH \"%PATH%;%JAVA_HOME%\\bin\";", "C:\\> java -version" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/installing_and_using_red_hat_build_of_openjdk_21_for_windows/updating-openjdk-windows-using-archive
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/making-open-source-more-inclusive
Chapter 19. Upgrading an overcloud with external Ceph deployments
Chapter 19. Upgrading an overcloud with external Ceph deployments This scenario contains an example upgrade process for an overcloud environment with external Ceph deployments, which includes the following node types: Three Controller nodes External Ceph Storage cluster Multiple Compute nodes 19.1. Running the overcloud upgrade preparation The upgrade requires running the openstack overcloud upgrade prepare command, which performs the following tasks: Updates the overcloud plan to OpenStack Platform 16.2 Prepares the nodes for the upgrade Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option, replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Run the upgrade preparation command: Include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( rhsm.yaml ) with the registration and subscription parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( neutron-ovs.yaml ) to maintain OVS compatibility. Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file using --roles-file . If applicable, your composable network ( network_data ) file using --networks-file . If you use a custom stack name, pass the name with the --stack option. Wait until the upgrade preparation completes. Download the container images: 19.2. Upgrading Controller nodes with external Ceph deployments If you are upgrading with external Ceph deployments, you must complete this procedure. To upgrade all the Controller nodes to OpenStack Platform 16.2, you must upgrade each Controller node starting with the bootstrap Controller node. During the bootstrap Controller node upgrade process, a new Pacemaker cluster is created and new Red Hat OpenStack 16.2 containers are started on the node, while the remaining Controller nodes are still running on Red Hat OpenStack 13. After upgrading the bootstrap node, you must upgrade each additional node with Pacemaker services and ensure that each node joins the new Pacemaker cluster started with the bootstrap node. For more information, see Overcloud node upgrade workflow . In this example, the controller nodes are named using the default overcloud-controller- NODEID convention. This includes the following three controller nodes: overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 Substitute these values for your own node names where applicable. Procedure Source the stackrc file: Identify the bootstrap Controller node by running the following command on the undercloud node: Optional: Replace <stack_name> with the name of the stack. If not specified, the default is overcloud . Upgrade the bootstrap Controller node: Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Important The command causes an outage on the control plane. You cannot perform any standard operations on the overcloud during the next few steps. Run the external upgrade command with the system_upgrade_transfer_data tag: This command copies the latest version of the database from an existing node to the bootstrap node.
Run the upgrade command with the nova_hybrid_state tag and run only the upgrade_steps_playbook.yaml playbook: This command launches temporary 16.2 containers on Compute nodes to help facilitate workload migration when you upgrade Compute nodes at a later step. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. Important The control plane becomes active when this command finishes. You can perform standard operations on the overcloud again. Verify that after the upgrade, the new Pacemaker cluster is started and that the control plane services such as galera, rabbit, haproxy, and redis are running: Upgrade the Controller node: Verify that the old cluster is no longer running: An error similar to the following is displayed when the cluster is not running: Run the upgrade command with the system_upgrade tag on the Controller node: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. In addition to this node, include the previously upgraded bootstrap node in the --limit option. Upgrade the final Controller node: Verify that the old cluster is no longer running: An error similar to the following is displayed when the cluster is not running: Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. Include all Controller nodes in the --limit option. 19.3. Upgrading Compute nodes Upgrade all the Compute nodes to OpenStack Platform 16.2. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option, replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Migrate your instances (see the sketch after this chapter). For more information on migration strategies, see Migrating virtual machines between Compute nodes . Run the upgrade command with the system_upgrade tag: This command performs the following actions: Performs a Leapp upgrade of the operating system. Performs a reboot as a part of the Leapp upgrade. Run the upgrade command with no tags: This command performs the Red Hat OpenStack Platform upgrade. To upgrade multiple Compute nodes in parallel, set the --limit option to a comma-separated list of nodes that you want to upgrade. First perform the system_upgrade task: Then perform the standard OpenStack service upgrade: 19.4. Synchronizing the overcloud stack The upgrade requires an update of the overcloud stack to ensure that the stack resource structure and parameters align with a fresh deployment of OpenStack Platform 16.2. Note If you are not using the default stack name ( overcloud ), set your stack name with the --stack STACK NAME option, replacing STACK NAME with the name of your stack. Procedure Source the stackrc file: Edit the containers-prepare-parameter.yaml file and remove the following parameters and their values: ceph3_namespace ceph3_tag ceph3_image name_prefix_stein name_suffix_stein namespace_stein tag_stein To re-enable fencing in your overcloud, set the EnableFencing parameter to true in the fencing.yaml environment file.
Run the upgrade finalization command: Include the following options relevant to your environment: The environment file ( upgrades-environment.yaml ) with the upgrade-specific parameters ( -e ). The environment file ( fencing.yaml ) with the EnableFencing parameter set to true . The environment file ( rhsm.yaml ) with the registration and subscription parameters ( -e ). The environment file ( containers-prepare-parameter.yaml ) with your new container image locations ( -e ). In most cases, this is the same environment file that the undercloud uses. The environment file ( neutron-ovs.yaml ) to maintain OVS compatibility. Any custom configuration environment files ( -e ) relevant to your deployment. If applicable, your custom roles ( roles_data ) file using --roles-file . If applicable, your composable network ( network_data ) file using --networks-file . If you use a custom stack name, pass the name with the --stack option. Wait until the stack synchronization completes. Important You do not need the upgrades-environment.yaml file for any further deployment operations.
[ "source ~/stackrc", "openstack overcloud upgrade prepare --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/templates/rhsm.yaml -e /home/stack/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml ...", "openstack overcloud external-upgrade run --stack STACK NAME --tags container_image_prepare", "source ~/stackrc", "tripleo-ansible-inventory --list [--stack <stack_name>] |jq .overcloud_Controller.hosts[0]", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-0", "openstack overcloud external-upgrade run [--stack <stack_name>] --tags system_upgrade_transfer_data", "openstack overcloud upgrade run [--stack <stack_name>] --playbook upgrade_steps_playbook.yaml --tags nova_hybrid_state --limit all", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0", "sudo pcs status", "sudo pcs status", "Error: cluster is not currently running on this node", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-1", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0,overcloud-controller-1", "sudo pcs status", "Error: cluster is not currently running on this node", "openstack overcloud upgrade run [--stack <stack_name>] --tags system_upgrade --limit overcloud-controller-2", "openstack overcloud upgrade run [--stack <stack_name>] --limit overcloud-controller-0,overcloud-controller-1,overcloud-controller-2", "source ~/stackrc", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-compute-0", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-compute-0", "openstack overcloud upgrade run --stack STACK NAME --tags system_upgrade --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2", "openstack overcloud upgrade run --stack STACK NAME --limit overcloud-compute-0,overcloud-compute-1,overcloud-compute-2", "source ~/stackrc", "openstack overcloud upgrade converge --stack STACK NAME --templates -e ENVIRONMENT FILE ... -e /home/stack/templates/upgrades-environment.yaml -e /home/stack/templates/rhsm.yaml -e /home/stack/containers-prepare-parameter.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml ..." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/upgrading-an-overcloud-with-external-ceph-deployments_upgrading-overcloud-external-ceph
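The Compute node procedure in the chapter above begins with migrating instances off each node before running the system_upgrade step. The following is a hedged sketch of that migration step only: host and instance identifiers are placeholders, and the live-migration flag differs between openstackclient versions, so confirm the exact syntax for your client before use.
# List the instances currently hosted on the Compute node to be upgraded
openstack server list --host overcloud-compute-0 --all-projects
# Live migrate each instance off the node; depending on the client version
# the flag is --live-migration (newer clients) or --live <target-host> (older)
openstack server migrate --live-migration <instance_id>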
Chapter 122. AclRuleClusterResource schema reference
Chapter 122. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleTransactionalIdResource . It must have the value cluster for the type AclRuleClusterResource . Property Property type Description type string Must be cluster .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-AclRuleClusterResource-reference