title | content | commands | url
---|---|---|---|
Chapter 2. Package List | Chapter 2. Package List For information about package designation, see https://access.redhat.com/support/offerings/production/scope_moredetail . Table 2.1. Package List Package Name License ansible-collections-openstack "GPLv3+" ansible-config_template "ASL 2.0" ansible-pacemaker "ASL 2.0" ansible-role-atos-hsm "ASL 2.0" ansible-role-chrony "ASL 2.0" ansible-role-container-registry "ASL 2.0" ansible-role-lunasa-hsm "ASL 2.0" ansible-role-network-runner017 "ASL 2.0" ansible-role-openstack-ml2 "ASL 2.0" ansible-role-openstack-operations "ASL 2.0" ansible-role-redhat-subscription "ASL 2.0" ansible-role-thales-hsm "ASL 2.0" ansible-role-tripleo-modify-image "ASL 2.0" ansible-tripleo-ipa "ASL 2.0" ansible-tripleo-ipsec "GPLv3+" ansible-tripleo-powerflex "ASL 2.0" bootswatch-common "MIT" bootswatch-fonts "MIT" collectd "MIT and GPLv2" collectd-amqp1 "MIT and GPLv2" collectd-amqp "MIT and GPLv2" collectd-apache "MIT and GPLv2" collectd-bind "MIT and GPLv2" collectd-ceph "MIT and GPLv2" collectd-chrony "MIT and GPLv2" collectd-connectivity "MIT and GPLv2" collectd-curl "MIT and GPLv2" collectd-curl_json "MIT and GPLv2" collectd-curl_xml "MIT and GPLv2" collectd-dbi "MIT and GPLv2" collectd-disk "MIT and GPLv2" collectd-dns "MIT and GPLv2" collectd-dpdk_telemetry "MIT and GPLv2" collectd-generic-jmx "MIT and GPLv2" collectd-hugepages "MIT and GPLv2" collectd-ipmi "MIT and GPLv2" collectd-iptables "MIT and GPLv2" collectd-libpod-stats "MIT" collectd-log_logstash "MIT and GPLv2" collectd-mcelog "MIT and GPLv2" collectd-memcachec "MIT and GPLv2" collectd-mysql "MIT and GPLv2" collectd-netlink "MIT and GPLv2" collectd-openldap "MIT and GPLv2" collectd-ovs-events "MIT and GPLv2" collectd-ovs-stats "MIT and GPLv2" collectd-pcie-errors "MIT and GPLv2" collectd-ping "MIT and GPLv2" collectd-pmu "MIT and GPLv2" collectd-procevent "MIT and GPLv2" collectd-python "MIT and GPLv2" collectd-rdt "MIT and GPLv2" collectd-sensors "MIT and GPLv2" collectd-sensubility "ASL 2.0" collectd-smart "MIT and GPLv2" collectd-snmp "MIT and GPLv2" collectd-snmp-agent "MIT and GPLv2" collectd-sysevent "MIT and GPLv2" collectd-turbostat "MIT and GPLv2" collectd-utils "MIT and GPLv2" collectd-virt "MIT and GPLv2" collectd-write_http "MIT and GPLv2" collectd-write_kafka "MIT and GPLv2" collectd-write_prometheus "MIT and GPLv2" cpp-hocon "ASL 2.0" crudini "GPLv2" dibbler-client "GPLv2" dibbler-relay "GPLv2" dibbler-requestor "GPLv2" dibbler-server "GPLv2" dib-utils "ASL 2.0" diskimage-builder "ASL 2.0" double-conversion "BSD" dumb-init "MIT" elixir "ASL 2.0" erlang-asn1 "ASL 2.0" erlang-compiler "ASL 2.0" erlang-crypto "ASL 2.0" erlang-eldap "ASL 2.0" erlang-erts "ASL 2.0" erlang-hipe "ASL 2.0" erlang-inets "ASL 2.0" erlang-kernel "ASL 2.0" erlang-mnesia "ASL 2.0" erlang-os_mon "ASL 2.0" erlang-parsetools "ASL 2.0" erlang-public_key "ASL 2.0" erlang-runtime_tools "ASL 2.0" erlang-sasl "ASL 2.0" erlang-sd_notify "MIT" erlang-snmp "ASL 2.0" erlang-ssl "ASL 2.0" erlang-stdlib "ASL 2.0" erlang-syntax_tools "ASL 2.0" erlang-tools "ASL 2.0" erlang-xmerl "ASL 2.0" etcd "ASL 2.0" facter "ASL 2.0" fontawesome-fonts "OFL" fontawesome-fonts-web "OFL and MIT" gnocchi-api "ASL 2.0" gnocchi-common "ASL 2.0" gnocchi-metricd "ASL 2.0" gnocchi-statsd "ASL 2.0" golang-github-BurntSushi-toml-devel "BSD" golang-github-davecgh-go-spew-devel "ISC" golang-github-go-ini-ini-devel "ASL 2.0" golang-github-golang-sys-devel "BSD" golang-github-infrawatch-apputils "ASL 2.0" golang-github-pmezard-go-difflib-devel "BSD" 
golang-github-Sirupsen-logrus-devel "MIT" golang-github-streadway-amqp-devel "BSD" golang-github-stretchr-objx-devel "MIT" golang-github-stretchr-testify-devel "MIT" golang-github-urfave-cli-devel "MIT" golang-github-vbatts-tar-split "BSD" golang-golangorg-crypto-devel "BSD" golang-gopkg-check-devel "BSD" golang-gopkg-yaml-devel "LGPLv3 with exceptions" golang-gopkg-yaml-devel-v2 "LGPLv3 with exceptions" golang-qpid-apache "BSD and ASL 2.0" heat-cfntools "ASL 2.0" hiera "ASL 2.0" intel-cmt-cat "BSD" jevents "GPLv2 and BSD" kuryr-binding-scripts "ASL 2.0" leatherman "ASL 2.0 and MIT" libcollectdclient "MIT and GPLv2" libdbi "LGPLv2+" liberasurecode "BSD and CRC32" liboping "GPLv2" libsodium "ISC" libwebsockets "LGPLv2 and Public Domain and BSD and MIT and zlib" mdi-common "OFL" mdi-fonts "OFL" ndisc6 "GPLv2 or GPLv3" novnc "GPLv3" octavia-amphora-image-x86_64 "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" openstack-aodh-api "ASL 2.0" openstack-aodh-common "ASL 2.0" openstack-aodh-compat "ASL 2.0" openstack-aodh-evaluator "ASL 2.0" openstack-aodh-expirer "ASL 2.0" openstack-aodh-listener "ASL 2.0" openstack-aodh-notifier "ASL 2.0" openstack-barbican "ASL 2.0" openstack-barbican-api "ASL 2.0" openstack-barbican-common "ASL 2.0" openstack-barbican-keystone-listener "ASL 2.0" openstack-barbican-worker "ASL 2.0" openstack-ceilometer-central "ASL 2.0" openstack-ceilometer-common "ASL 2.0" openstack-ceilometer-compute "ASL 2.0" openstack-ceilometer-ipmi "ASL 2.0" openstack-ceilometer-notification "ASL 2.0" openstack-ceilometer-polling "ASL 2.0" openstack-cinder "ASL 2.0" openstack-dashboard "ASL 2.0 and BSD" openstack-dashboard-theme "ASL 2.0" openstack-designate-agent "ASL 2.0" openstack-designate-api "ASL 2.0" openstack-designate-central "ASL 2.0" openstack-designate-common "ASL 2.0" openstack-designate-mdns "ASL 2.0" openstack-designate-producer "ASL 2.0" openstack-designate-sink "ASL 2.0" openstack-designate-ui "ASL 2.0" openstack-designate-worker "ASL 2.0" openstack-ec2-api "ASL 2.0" openstack-glance "ASL 2.0" openstack-heat-agents "ASL 2.0" openstack-heat-api "ASL 2.0" openstack-heat-api-cfn "ASL 2.0" openstack-heat-common "ASL 2.0" openstack-heat-engine "ASL 2.0" openstack-heat-monolith "ASL 2.0" openstack-heat-ui "ASL 2.0" openstack-ironic-api "ASL 2.0" openstack-ironic-common "ASL 2.0" openstack-ironic-conductor "ASL 2.0" openstack-ironic-inspector "ASL 2.0" openstack-ironic-inspector-dnsmasq "ASL 2.0" openstack-ironic-python-agent "ASL 2.0" openstack-ironic-python-agent-builder "ASL 2.0" openstack-ironic-staging-drivers "ASL 2.0" openstack-ironic-ui "ASL 2.0" openstack-keystone "ASL 2.0" openstack-manila "ASL 2.0" openstack-manila-share "ASL 2.0" openstack-manila-ui "ASL 2.0" openstack-mistral-all "ASL 2.0" openstack-mistral-api "ASL 2.0" openstack-mistral-common "ASL 2.0" openstack-mistral-engine "ASL 2.0" 
openstack-mistral-event-engine "ASL 2.0" openstack-mistral-executor "ASL 2.0" openstack-mistral-notifier "ASL 2.0" openstack-neutron "ASL 2.0" openstack-neutron-bgp-dragent "ASL 2.0" openstack-neutron-bigswitch-agent "ASL 2.0" openstack-neutron-common "ASL 2.0" openstack-neutron-dynamic-routing-common "ASL 2.0" openstack-neutron-l2gw-agent "ASL 2.0" openstack-neutron-linuxbridge "ASL 2.0" openstack-neutron-macvtap-agent "ASL 2.0" openstack-neutron-metering-agent "ASL 2.0" openstack-neutron-ml2 "ASL 2.0" openstack-neutron-openvswitch "ASL 2.0" openstack-neutron-rpc-server "ASL 2.0" openstack-neutron-sriov-nic-agent "ASL 2.0" openstack-nova "ASL 2.0" openstack-nova-api "ASL 2.0" openstack-nova-common "ASL 2.0" openstack-nova-compute "ASL 2.0" openstack-nova-conductor "ASL 2.0" openstack-nova-console "ASL 2.0" openstack-nova-migration "ASL 2.0" openstack-nova-novncproxy "ASL 2.0" openstack-nova-scheduler "ASL 2.0" openstack-nova-serialproxy "ASL 2.0" openstack-nova-spicehtml5proxy "ASL 2.0" openstack-octavia-amphora-agent "ASL 2.0" openstack-octavia-api "ASL 2.0" openstack-octavia-common "ASL 2.0" openstack-octavia-diskimage-create "ASL 2.0" openstack-octavia-health-manager "ASL 2.0" openstack-octavia-housekeeping "ASL 2.0" openstack-octavia-ui "ASL 2.0" openstack-octavia-worker "ASL 2.0" openstack-panko-api "ASL 2.0" openstack-panko-common "ASL 2.0" openstack-placement-api "ASL 2.0" openstack-placement-common "ASL 2.0" openstack-selinux "GPLv2" openstack-swift-account "ASL 2.0" openstack-swift-container "ASL 2.0" openstack-swift-object "ASL 2.0" openstack-swift-proxy "ASL 2.0" openstack-tempest "ASL 2.0" openstack-tempest-all "ASL 2.0" openstack-tripleo-common "ASL 2.0" openstack-tripleo-common-container-base "ASL 2.0" openstack-tripleo-common-containers "ASL 2.0" openstack-tripleo-common-devtools "ASL 2.0" openstack-tripleo-heat-templates "ASL 2.0" openstack-tripleo-image-elements "ASL 2.0" openstack-tripleo-puppet-elements "ASL 2.0" openstack-tripleo-validations "ASL 2.0" openstack-zaqar "ASL 2.0" os-apply-config "ASL 2.0" os-collect-config "ASL 2.0" os-net-config "ASL 2.0" os-refresh-config "ASL 2.0" paunch-services "ASL 2.0" plotnetcfg "GPLv2+" pmu-data "GPLv2 and BSD" pmu-tools "GPLv2 and BSD" puppet "ASL 2.0" puppet-aodh "ASL 2.0" puppet-apache "ASL 2.0" puppet-archive "ASL 2.0" puppet-auditd "BSD" puppet-barbican "ASL 2.0" puppet-cassandra "ASL 2.0" puppet-ceilometer "ASL 2.0" puppet-ceph "ASL 2.0" puppet-certmonger "ASL 2.0" puppet-cinder "ASL 2.0" puppet-collectd "ASL 2.0" puppet-concat "ASL 2.0" puppet-contrail "ASL 2.0" puppet-corosync "ASL 2.0" puppet-datacat "ASL 2.0" puppet-designate "ASL 2.0" puppet-dns "ASL 2.0" puppet-ec2api "ASL 2.0" puppet-elasticsearch "ASL 2.0" puppet-etcd "ASL 2.0" puppet-fdio "ASL 2.0" puppet-firewall "ASL 2.0" puppet-git "ASL 2.0" puppet-glance "ASL 2.0" puppet-gnocchi "ASL 2.0" puppet-haproxy "ASL 2.0" puppet-headless "ASL 2.0" puppet-heat "ASL 2.0" puppet-horizon "ASL 2.0" puppet-inifile "ASL 2.0" puppet-ipaclient "MIT" puppet-ironic "ASL 2.0" puppet-java "ASL 2.0" puppet-kafka "ASL 2.0" puppet-keepalived "ASL 2.0" puppet-keystone "ASL 2.0" puppet-kibana3 "ASL 2.0" puppet-kmod "ASL 2.0" puppet-manila "ASL 2.0" puppet-memcached "ASL 2.0" puppet-midonet "ASL 2.0" puppet-mistral "ASL 2.0" puppet-module-data "ASL 2.0" puppet-mysql "ASL 2.0" puppet-n1k-vsm "ASL 2.0" puppet-neutron "ASL 2.0" puppet-nova "ASL 2.0" puppet-nssdb "ASL 2.0" puppet-octavia "ASL 2.0" puppet-opendaylight "BSD-2-Clause" puppet-openstack_extras "ASL 2.0" puppet-openstacklib "ASL 
2.0" puppet-oslo "ASL 2.0" puppet-ovn "ASL 2.0" puppet-pacemaker "ASL 2.0" puppet-panko "ASL 2.0" puppet-placement "ASL 2.0" puppet-qdr "ASL 2.0" puppet-rabbitmq "ASL 2.0" puppet-redis "ASL 2.0" puppet-remote "ASL 2.0" puppet-rsync "ASL 2.0" puppet-rsyslog "ASL 2.0" puppet-sahara "ASL 2.0" puppet-server "ASL 2.0" puppet-snmp "ASL 2.0" puppet-ssh "ASL 2.0" puppet-staging "ASL 2.0" puppet-stdlib "ASL 2.0" puppet-swift "ASL 2.0" puppet-sysctl "GPLv2" puppet-systemd "ASL 2.0" puppet-timezone "ASL 2.0" puppet-tomcat "ASL 2.0" puppet-tripleo "ASL 2.0" puppet-trove "ASL 2.0" puppet-vcsrepo "GPLv2" puppet-veritas_hyperscale "ASL 2.0" puppet-vswitch "ASL 2.0" puppet-xinetd "ASL 2.0" puppet-zaqar "ASL 2.0" puppet-zookeeper "ASL 2.0" python3-adal "MIT" python3-alembic "MIT" python3-amqp "BSD" python3-aniso8601 "GPLv3+" python3-ansible-runner "ASL 2.0" python3-anyjson "BSD" python3-aodh "ASL 2.0" python3-aodhclient "ASL 2.0" python3-appdirs "MIT" python3-atomicwrites "MIT" python3-autobahn "MIT" python3-automaton "ASL 2.0" python3-barbican "ASL 2.0" python3-barbicanclient "ASL 2.0" python3-barbican-tests-tempest "ASL 2.0" python3-bcrypt "ASL 2.0 and Public Domain and BSD" python3-beautifulsoup4 "MIT" python3-boto "MIT" python3-boto3 "ASL 2.0" python3-botoCore "ASL 2.0" python3-cachetools "MIT" python3-castellan "ASL 2.0" python3-ceilometer "ASL 2.0" python3-ceilometermiddleware "ASL 2.0" python3-certifi "MPLv2.0" python3-cinder "ASL 2.0" python3-cinderclient "ASL 2.0" python3-cinderlib "ASL 2.0" python3-cinderlib-tests-functional "ASL 2.0" python3-cinder-tests-tempest "ASL 2.0" python3-cliff "ASL 2.0" python3-cmd2 "MIT" python3-collectd-gnocchi "ASL 2.0" python3-collectd-rabbitmq-monitoring "ASL 2.0" python3-colorama "BSD" python3-construct "MIT" python3-contextlib2 "Python" python3-cotyledon "ASL 2.0" python3-cradox "LGPLv2" python3-croniter "MIT" python3-crypto "Public Domain and Python" python3-cursive "ASL 2.0" python3-Cython "Python" python3-daemon "ASL 2.0" python3-daiquiri "ASL 2.0" python3-dateutil "BSD" python3-ddt "MIT" python3-debtcollector "ASL 2.0" python3-defusedxml "Python" python3-designate "ASL 2.0" python3-designateclient "ASL 2.0" python3-designate-tests-tempest "ASL 2.0" python3-dictdiffer "MIT" python3-django20 "BSD" python3-django-appconf "BSD" python3-django-compressor "MIT" python3-django-debreach "BSD" python3-django-horizon "ASL 2.0 and BSD" python3-django-pyscss "BSD" python3-dogpile-cache "MIT" python3-dracclient "ASL 2.0" python3-ec2-api "ASL 2.0" python3-editor "ASL 2.0" python3-etcd3gw "ASL 2.0" python3-eventlet "MIT" python3-extras "MIT" python3-falcon "ASL 2.0" python3-fasteners "ASL 2.0" python3-fixtures "ASL 2.0 or BSD" python3-flake8 "MIT" python3-flask "BSD" python3-flask-restful "BSD" python3-funcsigs "ASL 2.0" python3-future "MIT" python3-futurist "ASL 2.0" python3-gabbi "ASL 2.0" python3-gitdb "BSD" python3-GitPython "BSD" python3-glance "ASL 2.0" python3-glanceclient "ASL 2.0" python3-glance-store "ASL 2.0" python3-gnocchi "ASL 2.0" python3-gnocchiclient "ASL 2.0" python3-google-auth "ASL 2.0" python3-greenlet "MIT" python3-gunicorn "MIT" python3-hardware "ASL 2.0" python3-hardware-detect "ASL 2.0" python3-heat-agent "ASL 2.0" python3-heat-agent-ansible "ASL 2.0" python3-heat-agent-apply-config "ASL 2.0" python3-heat-agent-docker-cmd "ASL 2.0" python3-heat-agent-hiera "ASL 2.0" python3-heat-agent-json-file "ASL 2.0" python3-heat-agent-puppet "ASL 2.0" python3-heatclient "ASL 2.0" python3-heat-tests-tempest "ASL 2.0" python3-horizon-tests-tempest "ASL 2.0" 
python3-httplib2 "MIT" python3-ImcSdk "ASL 2.0" python3-importlib-metadata "ASL 2.0" python3-ironicclient "ASL 2.0" python3-ironic-inspector-client "ASL 2.0" python3-ironic-lib "ASL 2.0" python3-ironic-neutron-agent "ASL 2.0" python3-ironic-prometheus-exporter "ASL 2.0" python3-ironic-python-agent "ASL 2.0" python3-ironic-tests-tempest "ASL 2.0" python3-iso8601 "MIT" python3-json-logger "BSD" python3-jsonpath-rw "ASL 2.0" python3-jsonpath-rw-ext "ASL 2.0" python3-junitxml "LGPLv3" python3-kazoo "ASL 2.0" python3-kerberos "ASL 2.0" python3-keyring "MIT and Python" python3-keystone "ASL 2.0" python3-keystoneauth1 "ASL 2.0" python3-keystoneclient "ASL 2.0" python3-keystonemiddleware "ASL 2.0" python3-keystone-tests-tempest "ASL 2.0" python3-kombu "BSD and Python" python3-kubernetes "ASL 2.0" python3-kuryr-tests-tempest "ASL 2.0" python3-ldap3 "LGPLv2+" python3-ldappool "MPLv1.1 and GPLv2+ and LGPLv2+" python3-lesscpy "MIT" python3-linecache2 "Python" python3-lockfile "MIT" python3-logutils "BSD" python3-lz4 "BSD" python3-magnumclient "ASL 2.0" python3-manila "ASL 2.0" python3-manilaclient "ASL 2.0" python3-manila-tests-tempest "ASL 2.0" python3-markupsafe "BSD" python3-mccabe "MIT" python3-memcached "Python" python3-metalsmith "ASL 2.0" python3-microversion-parse "ASL 2.0" python3-migrate "MIT" python3-mimeparse "MIT" python3-mistral "ASL 2.0" python3-mistralclient "ASL 2.0" python3-mistral-lib "ASL 2.0" python3-mistral-tests-tempest "ASL 2.0" python3-mock "BSD" python3-monotonic "ASL 2.0" python3-more-itertools "MIT" python3-mox3 "ASL 2.0" python3-msgpack "ASL 2.0" python3-munch "MIT" python3-netifaces "MIT" python3-networking-ansible "ASL 2.0" python3-networking-baremetal "ASL 2.0" python3-networking-bgpvpn "ASL 2.0" python3-networking-bgpvpn-dashboard "ASL 2.0" python3-networking-bgpvpn-heat "ASL 2.0" python3-networking-bigswitch "ASL 2.0" python3-networking-l2gw "ASL 2.0" python3-networking-l2gw-tests-tempest "ASL 2.0" python3-networking-ovn "ASL 2.0" python3-networking-ovn-metadata-agent "ASL 2.0" python3-networking-ovn-migration-tool "ASL 2.0" python3-networking-sfc "ASL 2.0" python3-networking-vmware-nsx "ASL 2.0" python3-network-runner017 "ASL 2.0" python3-networkx "BSD" python3-networkx-Core "BSD" python3-neutron "ASL 2.0" python3-neutronclient "ASL 2.0" python3-neutron-dynamic-routing "ASL 2.0" python3-neutron-lib "ASL 2.0" python3-neutron-lib-tests "ASL 2.0" python3-neutron-tests-tempest "ASL 2.0" python3-nova "ASL 2.0" python3-novaclient "ASL 2.0" python3-novajoin "ASL 2.0" python3-novajoin-tests-tempest "ASL 2.0" python3-numpy "BSD" python3-numpy-f2py "BSD and Python and ASL 2.0" python3-octavia "ASL 2.0" python3-octaviaclient "ASL 2.0" python3-octavia-lib "ASL 2.0" python3-octavia-tests-tempest "ASL 2.0" python3-octavia-tests-tempest-golang "ASL 2.0" python3-openshift "MIT" python3-openstackclient "ASL 2.0" python3-openstacksdk "ASL 2.0" python3-os-brick "ASL 2.0" python3-osc-lib "ASL 2.0" python3-os-client-config "ASL 2.0" python3-osc-placement "ASL 2.0" python3-os-ken "ASL 2.0" python3-oslo-cache "ASL 2.0" python3-oslo-concurrency "ASL 2.0" python3-oslo-config "ASL 2.0" python3-oslo-context "ASL 2.0" python3-oslo-db "ASL 2.0" python3-oslo-i18n "ASL 2.0" python3-oslo-log "ASL 2.0" python3-oslo-messaging "ASL 2.0" python3-oslo-middleware "ASL 2.0" python3-oslo-policy "ASL 2.0" python3-oslo-privsep "ASL 2.0" python3-oslo-reports "ASL 2.0" python3-oslo-rootwrap "ASL 2.0" python3-oslo-serialization "ASL 2.0" python3-oslo-service "ASL 2.0" python3-oslotest "ASL 2.0" 
python3-oslo-upgradecheck "ASL 2.0" python3-oslo-utils "ASL 2.0" python3-oslo-versionedobjects "ASL 2.0" python3-oslo-vmware "ASL 2.0" python3-osprofiler "ASL 2.0" python3-os-resource-classes "ASL 2.0" python3-os-service-types "ASL 2.0" python3-os-testr "ASL 2.0" python3-os-traits "ASL 2.0" python3-os-vif "ASL 2.0" python3-os-win "ASL 2.0" python3-os-xenapi "ASL 2.0" python3-ovirt-engine-sdk4 "ASL 2.0" python3-ovsdbapp "ASL 2.0" python3-panko "ASL 2.0" python3-pankoclient "ASL 2.0" python3-paramiko "LGPLv2+" python3-passlib "BSD and Beerware and Copyright only" python3-paste "MIT and ZPLv2.0 and Python and (AFL or MIT) and (MIT or ASL 2.0)" python3-paste-deploy "MIT" python3-patrole-tests-tempest "ASL 2.0" python3-paunch "ASL 2.0" python3-paunch-tests "ASL 2.0" python3-pbr "ASL 2.0" python3-pecan "BSD" python3-pexpect "MIT" python3-pint "BSD" python3-placement "ASL 2.0" python3-pluggy "MIT" python3-posix_ipc "BSD" python3-proliantutils "ASL 2.0" python3-prometheus_client "ASL 2.0" python3-protobuf "BSD" python3-psutil "BSD" python3-pyasn1 "BSD" python3-pyasn1-modules "BSD" python3-pycadf "ASL 2.0" python3-pycodestyle "MIT" python3-pyeclib "BSD" python3-pyflakes "MIT" python3-pyghmi "ASL 2.0" python3-pymemcache "ASL 2.0" python3-pynacl "ASL 2.0" python3-pyngus "ASL 2.0" python3-pyparsing "MIT" python3-pyrabbit2 "MIT" python3-pyroute2 "GPLv2+" python3-pysaml2 "ASL 2.0" python3-pysendfile "MIT" python3-pysnmp "BSD" python3-pystache "MIT" python3-pytest "MIT" python3-pytimeparse "MIT" python3-pyxattr "LGPLv2+" python3-qpid-proton "ASL 2.0" python3-rcssmin "ASL 2.0" python3-redis "MIT" python3-repoze-lru "BSD" python3-requestsexceptions "ASL 2.0" python3-requests-kerberos "MIT" python3-retrying "ASL 2.0" python3-rfc3986 "ASL 2.0" python3-rhosp-openvswitch "Public domain" python3-rjsmin "ASL 2.0" python3-routes "BSD" python3-rsa "ASL 2.0" python3-rsdclient "ASL 2.0" python3-rsd-lib "ASL 2.0" python3-ruamel-yaml "MIT" python3-s3transfer "ASL 2.0" python3-saharaclient "ASL 2.0" python3-scciclient "ASL 2.0" python3-scrypt "BSD" python3-scss "MIT" python3-SecretStorage "BSD" python3-setproctitle "BSD" python3-shade "ASL 2.0" python3-simplegeneric "Python or ZPLv2.1" python3-simplejson "(MIT or AFL) and (MIT or GPLv2)" python3-six "MIT" python3-smmap "BSD" python3-snappy "BSD" python3-sqlalchemy-collectd "MIT" python3-sqlalchemy-utils "BSD" python3-sqlparse "BSD" python3-statsd "MIT" python3-stestr "ASL 2.0" python3-stevedore "ASL 2.0" python3-string_utils "MIT" python3-subunit "ASL 2.0 or BSD" python3-sushy "ASL 2.0" python3-sushy-oem-idrac "ASL 2.0" python3-swift "ASL 2.0" python3-swiftclient "ASL 2.0" python3-sysv_ipc "GPLv3+" python3-tap-as-a-service "ASL 2.0" python3-taskflow "ASL 2.0" python3-telemetry-tests-tempest "ASL 2.0" python3-tempest "ASL 2.0" python3-tempestconf "ASL 2.0" python3-tempest-tests "ASL 2.0" python3-tempita "MIT" python3-tenacity "ASL 2.0" python3-testrepository "ASL 2.0" python3-testscenarios "ASL 2.0 and BSD" python3-testtools "MIT" python3-tinyrpc "MIT" python3-tooz "ASL 2.0" python3-traceback2 "Python" python3-tripleoclient "ASL 2.0" python3-tripleoclient-heat-installer "ASL 2.0" python3-tripleo-common "ASL 2.0" python3-tripleo-common-tests-tempest "ASL 2.0" python3-trollius "ASL 2.0" python3-troveclient "ASL 2.0" python3-twisted "MIT" python3-txaio "MIT" python3-ujson "BSD" python3-unittest2 "BSD" python3-urllib-gssapi "ASL 2.0" python3-validations-libs "ASL 2.0" python3-versiontools "LGPLv3" python3-vine "BSD" python3-vmware-nsxlib "ASL 2.0" python3-voluptuous "BSD" 
python3-waitress "ZPLv2.1" python3-warlock "ASL 2.0" python3-webob "MIT" python3-websocket-client "BSD" python3-websockify "LGPLv3" python3-webtest "MIT" python3-werkzeug "BSD" python3-wrapt "BSD" python3-wsaccel "ASL 2.0" python3-wsgi_intercept "MIT" python3-wsme "MIT" python3-XStatic "MIT" python3-XStatic-Angular "MIT" python3-XStatic-Angular-Bootstrap "MIT" python3-XStatic-Angular-FileUpload "MIT" python3-XStatic-Angular-Gettext "MIT" python3-XStatic-Angular-lrdragndrop "MIT" python3-XStatic-Angular-Schema-Form "MIT" python3-XStatic-Angular-UUID "MIT" python3-XStatic-Angular-Vis "MIT" python3-XStatic-Bootstrap-Datepicker "ASL 2.0" python3-XStatic-Bootstrap-SCSS "MIT" python3-XStatic-bootswatch "MIT" python3-XStatic-D3 "BSD" python3-XStatic-FileSaver "MIT" python3-XStatic-Font-Awesome "OFL and MIT" python3-XStatic-Hogan "ASL 2.0" python3-XStatic-Jasmine "MIT" python3-XStatic-jQuery224 "MIT" python3-XStatic-JQuery-Migrate "MIT" python3-XStatic-JQuery-quicksearch "MIT" python3-XStatic-JQuery-TableSorter "MIT" python3-XStatic-jquery-ui "CC0" python3-XStatic-JSEncrypt "MIT" python3-XStatic-Json2yaml "MIT" python3-XStatic-JS-Yaml "MIT" python3-XStatic-Magic-Search "ASL 2.0" python3-XStatic-mdi "OFL" python3-XStatic-objectpath "MIT" python3-XStatic-Rickshaw "MIT" python3-XStatic-roboto-fontface "ASL 2.0" python3-XStatic-smart-table "MIT" python3-XStatic-Spin "MIT" python3-XStatic-termjs "MIT" python3-XStatic-tv4 "Public Domain" python3-yappi "MIT" python3-yaql "ASL 2.0" python3-zake "ASL 2.0" python3-zaqarclient "ASL 2.0" python3-zaqar-tests-tempest "ASL 2.0" python3-zeroconf "LGPLv2" python3-zipp "MIT" python3-zope-event "ZPLv2.1" python3-zope-interface "ZPLv2.1" python-openstackclient-lang "ASL 2.0" python-oslo-cache-lang "ASL 2.0" python-oslo-concurrency-lang "ASL 2.0" python-oslo-db-lang "ASL 2.0" python-oslo-i18n-lang "ASL 2.0" python-oslo-log-lang "ASL 2.0" python-oslo-middleware-lang "ASL 2.0" python-oslo-policy-lang "ASL 2.0" python-oslo-privsep-lang "ASL 2.0" python-oslo-utils-lang "ASL 2.0" python-oslo-versionedobjects-lang "ASL 2.0" python-oslo-vmware-lang "ASL 2.0" python-pycadf-common "ASL 2.0" qpid-dispatch-router "ASL 2.0" qpid-dispatch-tools "ASL 2.0" qpid-proton-c "ASL 2.0" qpid-proton-c-devel "ASL 2.0" rabbitmq-server "MPLv1.1" rhosp-director-images "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-all "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or 
GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-base "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-ipa-ppc64le "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-ipa-x86_64 "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-metadata "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-minimal "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and 
LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-ppc64le "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-director-images-x86_64 "AFL and BSD and (BSD or GPLv2+) and BSD with advertising and Boost and GFDL and GPL and GPL+ and (GPL+ or Artistic) and GPLv1+ and GPLv2 and (GPLv2 or BSD) and (GPLv2 or GPLv3) and GPLv2 with exceptions and GPLv2+ and (GPLv2+ or AFL) and GPLv2+ with exceptions and GPLv3 and GPLv3+ and GPLv3+ with exceptions and IJG and ISC and LGPLv2 and LGPLv2+ and (LGPLv2+ or BSD) and (LGPLv2+ or MIT) and LGPLv2+ with exceptions and LGPLv2/GPLv2 and LGPLv3 and LGPLv3+ and MIT and (MIT or LGPLv2+ or BSD) and (MPLv1.1 or GPLv2+ or LGPLv2+) and Open Publication and OpenLDAP and OpenSSL and Public Domain and Python and (Python or ZPLv2.0) and Rdisc and Redistributable no modification permitted and SISSL and Vim and zlib" rhosp-network-scripts-openvswitch "Public domain" rhosp-openvswitch "Public domain" rhosp-ovn "Public domain" rhosp-ovn-central "Public domain" rhosp-ovn-host "Public domain" rhosp-ovn-vtep "Public domain" rhosp-release "GPLv2" roboto-fontface-common "ASL 2.0" roboto-fontface-fonts "ASL 2.0" ruby-augeas "LGPLv2+" ruby-facter "ASL 2.0" rubygem-pathspec "ASL 2.0" rubygem-rgen "MIT" ruby-shadow "Public Domain" subunit-filters "ASL 2.0 or BSD" sysbench "GPLv2+" tripleo-ansible "ASL 2.0" validations-common "ASL 2.0" web-assets-filesystem "Public Domain" web-assets-httpd "MIT" xstatic-angular-bootstrap-common "MIT" XStatic-Angular-common "MIT" xstatic-angular-fileupload-common "MIT" xstatic-angular-gettext-common "MIT" xstatic-angular-lrdragndrop-common "MIT" xstatic-angular-schema-form-common "MIT" xstatic-angular-uuid-common "MIT" xstatic-angular-vis-common "MIT" xstatic-bootstrap-datepicker-common "ASL 2.0" xstatic-bootstrap-scss-common "MIT" xstatic-d3-common "BSD" xstatic-filesaver-common "MIT" xstatic-hogan-common "ASL 2.0" xstatic-jasmine-common "MIT" xstatic-jquery-migrate-common "MIT" xstatic-jquery-quicksearch-common "MIT" xstatic-jquery-tablesorter-common "MIT" xstatic-jquery-ui-common "CC0" xstatic-jsencrypt-common "MIT" xstatic-json2yaml-common "MIT" xstatic-js-yaml-common "MIT" XStatic-Magic-Search-common "ASL 2.0" xstatic-objectpath-common "MIT" xstatic-rickshaw-common "MIT" xstatic-smart-table-common "MIT" xstatic-spin-common "MIT" xstatic-termjs-common "MIT" xstatic-tv4-common "Public Domain" yaml-cpp "MIT" | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/package_manifest/ch02 |
Chapter 19. DeclarativeConfigHealthService | Chapter 19. DeclarativeConfigHealthService 19.1. GetDeclarativeConfigHealths GET /v1/declarative-config/health 19.1.1. Description 19.1.2. Parameters 19.1.3. Return Type V1GetDeclarativeConfigHealthsResponse 19.1.4. Content Type application/json 19.1.5. Responses Table 19.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetDeclarativeConfigHealthsResponse 0 An unexpected error response. GooglerpcStatus 19.1.6. Samples 19.1.7. Common object reference 19.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 19.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 19.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 19.1.7.3. 
StorageDeclarativeConfigHealth Field Name Required Nullable Type Description Format id String name String status StorageDeclarativeConfigHealthStatus UNHEALTHY, HEALTHY, errorMessage String resourceName String resourceType StorageDeclarativeConfigHealthResourceType CONFIG_MAP, ACCESS_SCOPE, PERMISSION_SET, ROLE, AUTH_PROVIDER, GROUP, NOTIFIER, lastTimestamp Date Timestamp when the current status was set. date-time 19.1.7.4. StorageDeclarativeConfigHealthResourceType Enum Values CONFIG_MAP ACCESS_SCOPE PERMISSION_SET ROLE AUTH_PROVIDER GROUP NOTIFIER 19.1.7.5. StorageDeclarativeConfigHealthStatus Enum Values UNHEALTHY HEALTHY 19.1.7.6. V1GetDeclarativeConfigHealthsResponse Field Name Required Nullable Type Description Format healths List of StorageDeclarativeConfigHealth | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/declarativeconfighealthservice |
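To make the GetDeclarativeConfigHealths call described above more concrete, the following is a minimal Java sketch that issues the GET request with java.net.http and prints the raw JSON body. The ROX_ENDPOINT and ROX_API_TOKEN environment variables and the central.example.com address are illustrative assumptions rather than part of the documented API; a successful call (HTTP 200) returns a V1GetDeclarativeConfigHealthsResponse whose healths list holds StorageDeclarativeConfigHealth entries.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeclarativeConfigHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumed environment variables for the Central address and an API token.
        String endpoint = System.getenv().getOrDefault("ROX_ENDPOINT", "https://central.example.com");
        String token = System.getenv("ROX_API_TOKEN");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "/v1/declarative-config/health"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // HTTP 200 carries a V1GetDeclarativeConfigHealthsResponse JSON document;
        // any other code maps to the GooglerpcStatus error structure described above.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```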
Release notes | Release notes Red Hat Advanced Cluster Security for Kubernetes 4.5 Highlights what is new and what has changed with Red Hat Advanced Cluster Security for Kubernetes releases Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/release_notes/index |
Chapter 10. Volume size overrides | Chapter 10. Volume size overrides You can specify the desired size of storage resources provisioned for managed components. The default size for Clair and the PostgreSQL databases is 50Gi . Choosing a large enough capacity upfront can be useful for performance reasons or when your storage backend does not have resize capability. In the following example, the volume size for the Clair and Quay PostgreSQL databases has been set to 70Gi : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-example namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clair managed: true overrides: volumeSize: 70Gi - kind: postgres managed: true overrides: volumeSize: 70Gi - kind: clairpostgres managed: true Note The volume size of the clairpostgres component cannot be overridden. This is a known issue and will be fixed in a future version of Red Hat Quay (PROJQUAY-4301). | [
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-example namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clair managed: true overrides: volumeSize: 70Gi - kind: postgres managed: true overrides: volumeSize: 70Gi - kind: clairpostgres managed: true"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/operator-volume-size-overrides |
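If you prefer to apply the override programmatically rather than editing the QuayRegistry YAML above, the sketch below sends an equivalent JSON merge patch straight to the Kubernetes API with java.net.http. The API server URL, token handling, TLS trust, and the "quayregistries" plural name are all assumptions to verify against your cluster; because a merge patch replaces the whole components list, the body repeats every component from the example above rather than only the clair and postgres entries.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QuayVolumeSizePatch {
    public static void main(String[] args) throws Exception {
        // Assumptions: API server URL and a token allowed to patch QuayRegistry
        // objects; TLS trust for the cluster CA is not shown here.
        String apiServer = System.getenv("K8S_API_SERVER");
        String token = System.getenv("K8S_TOKEN");

        // A JSON merge patch replaces the whole components list, so every
        // component from the YAML example is repeated, not just clair/postgres.
        String patch = """
                {"spec":{"components":[
                  {"kind":"objectstorage","managed":false},
                  {"kind":"route","managed":true},
                  {"kind":"tls","managed":false},
                  {"kind":"clair","managed":true,"overrides":{"volumeSize":"70Gi"}},
                  {"kind":"postgres","managed":true,"overrides":{"volumeSize":"70Gi"}},
                  {"kind":"clairpostgres","managed":true}
                ]}}""";

        // "quayregistries" is the assumed plural resource name for the QuayRegistry CRD.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer
                        + "/apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/quay-example"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/merge-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(patch))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```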
Chapter 10. GenericSecretSource schema reference | Chapter 10. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationCustom , KafkaListenerAuthenticationOAuth Property Property type Description key string The key under which the secret value is stored in the OpenShift Secret. secretName string The name of the OpenShift Secret containing the secret value. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-genericsecretsource-reference |
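As a rough illustration of where the two GenericSecretSource properties fit, the sketch below builds a hypothetical custom listener authentication fragment with Jackson's YAML writer. The surrounding field names (type, sasl, secrets) and the Secret names are assumptions for illustration only; check the KafkaListenerAuthenticationCustom, KafkaListenerAuthenticationOAuth, or KafkaClientAuthenticationOAuth schema for the exact parent field in your version.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;
import java.util.List;
import java.util.Map;

public class GenericSecretSourceExample {
    public static void main(String[] args) throws Exception {
        // Each entry in "secrets" follows the GenericSecretSource shape:
        // secretName = the OpenShift Secret, key = the entry inside that Secret.
        Map<String, Object> authentication = Map.of(
                "type", "custom",   // assumed parent: a custom listener authentication
                "sasl", true,
                "secrets", List.of(
                        Map.of("secretName", "broker-oauth-secret", "key", "client-secret"),
                        Map.of("secretName", "broker-oauth-secret", "key", "ca.crt")));

        ObjectMapper yaml = new ObjectMapper(new YAMLFactory());
        System.out.println(yaml.writeValueAsString(Map.of("authentication", authentication)));
    }
}
```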
D.3. Managing Replicas and Replication Agreements | D.3. Managing Replicas and Replication Agreements This chapter provides details on replication agreements and describes how to manage them. Note For guidelines on setting up additional replication agreements, see Section 4.2.2, "Replica Topology Recommendations" . D.3.1. Explaining Replication Agreements Replicas are joined in a replication agreement that copies data between them. Replication agreements are bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Note An initial replication agreement is set up between two replicas by the ipa-replica-install script. See Chapter 4, Installing and Uninstalling Identity Management Replicas for details on installing the initial replica. Types of Replication Agreements Identity Management supports the following three types of replication agreements: Replication agreements to replicate directory data, such as users, groups, and policies. You can manage these agreements using the ipa-replica-manage utility. Replication agreements to replicate certificate server data. You can manage these agreements using the ipa-csreplica-manage utility. Synchronization agreements to replicate user information with an Active Directory server. These agreements are not described in this guide. For documentation on synchronizing IdM and Active Directory, see the Synchronizing Active Directory and Identity Management Users in the Windows Integration Guide . The ipa-replica-manage and ipa-csreplica-manage utilities use the same format and arguments. The following sections of this chapter describe the most notable replication management operations performed using these utilities. For detailed information about the utilities, see the ipa-replica-manage (1) and ipa-csreplica-manage (1) man pages. D.3.2. Listing Replication Agreements To list the directory data replication agreements currently configured for a replica, use the ipa-replica-manage list command: Run ipa-replica-manage list without any arguments to list all replicas in the replication topology. In the output, locate the required replica: Add the replica's host name to ipa-replica-manage list to list the replication agreements. The output displays the replicas to which server1.example.com sends updates. To list certificate server replication agreements, use the ipa-csreplica-manage list command. D.3.3. Creating and Removing Replication Agreements Creating Replication Agreements To create a new replication agreement, use the ipa-replica-manage connect command: The command creates a new bilateral replication agreement going from server1.example.com to server2.example.com and from server2.example.com to server1.example.com . If you only specify one server with ipa-replica-manage connect , IdM creates a replication agreement between the local host and the specified server. To create a new certificate server replication agreement, use the ipa-csreplica-manage connect command. Removing Replication Agreements To remove a replication agreement, use the ipa-replica-manage disconnect command: This command disables replication from server1.example.com to server4.example.com and from server4.example.com to server1.example.com . The ipa-replica-manage disconnect command only removes the replication agreement. It leaves both servers in the Identity Management replication topology. 
To remove all replication agreements and data related to a replica, use the ipa-replica-manage del command, which removes the replica entirely from the Identity Management domain. To remove a certificate server replication agreement, use the ipa-csreplica-manage disconnect command. Similarly, to remove all certificate replication agreements and data between two servers, use the ipa-csreplica-manage del command. D.3.4. Initiating a Manual Replication Update Data changes between replicas with direct replication agreements between each other are replicated almost instantaneously. However, replicas that are not joined in a direct replication agreement do not receive updates as quickly. In some situations, it might be necessary to manually initiate an unplanned replication update. For example, before taking a replica offline for maintenance, all the queued changes waiting for the planned update must be sent to one or more other replicas. In this situation, you can initiate a manual replication update before taking the replica offline. To manually initiate a replication update, use the ipa-replica-manage force-sync command. The local host on which you run the command is the replica that receives the update. To specify the replica that sends the update, use the --from option. To initiate a replication update for certificate server data, use the ipa-csreplica-manage force-sync command. D.3.5. Re-initializing a Replica If a replica has been offline for a long period of time or its database has been corrupted, you can re-initialize it. Re-initialization is analogous to initialization, which is described in Section 4.5, "Creating the Replica: Introduction" . Re-initialization refreshes the replica with an updated set of data. Re-initialization can, for example, be used if an authoritative restore from backup is required. Note Waiting for a regular replication update or initiating a manual replication update will not help in this situation. During these replication updates, replicas only send changed entries to each other. Unlike re-initialization, replication updates do not refresh the whole database. To re-initialize a data replication agreement on a replica, use the ipa-replica-manage re-initialize command. The local host on which you run the command is the re-initialized replica. To specify the replica from which the data is obtained, use the --from option: To re-initialize a certificate server replication agreement, use the ipa-csreplica-manage re-initialize command. D.3.6. Removing a Replica Deleting or demoting a replica removes the IdM replica from the topology so that it no longer processes IdM requests. It also removes the host machine itself from the IdM domain. To delete a replica, perform these steps on the replica: List all replication agreements for the IdM domain. In the output, note the host name of the replica. Use the ipa-replica-manage del command to remove all agreements configured for the replica as well as all data about the replica. If the replica was configured with its own CA, then also use the ipa-csreplica-manage del command to remove all certificate server replication agreements. Note This step is only required if the replica itself was configured with an IdM CA. It is not required if only the master server or other replicas were configured with a CA. Uninstall the IdM server package. | [
"ipa-replica-manage list server1.example.com : master server2.example.com: master server3.example.com: master server4.example.com: master",
"ipa-replica-manage list server1.example.com server2.example.com: replica server3.example.com: replica",
"ipa-replica-manage connect server1.example.com server2.example.com",
"ipa-replica-manage disconnect server1.example.com server4.example.com",
"ipa-replica-manage del server2.example.com",
"ipa-replica-manage force-sync --from server1.example.com",
"ipa-replica-manage re-initialize --from server1.example.com",
"ipa-replica-manage list server1.example.com: master server2.example.com: master server3.example.com: master server4.example.com: master",
"ipa-replica-manage del server3.example.com",
"ipa-csreplica-manage del server3.example.com",
"ipa-server-install --uninstall -U"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-topology-old |
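For administrators who script the maintenance steps above, the following is a minimal Java sketch that shells out to the documented ipa-replica-manage commands with ProcessBuilder. It assumes it runs on the replica that should receive the update, keeps the terminal attached because the utilities may prompt for the Directory Manager password, and uses server1.example.com only as a placeholder peer.

```java
public class ReplicaMaintenance {
    /** Run one ipa-replica-manage invocation, inheriting the terminal so prompts work. */
    static void run(String... command) throws Exception {
        Process process = new ProcessBuilder(command).inheritIO().start();
        int exit = process.waitFor();
        if (exit != 0) {
            throw new IllegalStateException(String.join(" ", command) + " exited with " + exit);
        }
    }

    public static void main(String[] args) throws Exception {
        // Same operations as in the section above: list the topology, list the
        // agreements for one server, then pull queued changes from that server
        // to the local replica before taking it offline for maintenance.
        run("ipa-replica-manage", "list");
        run("ipa-replica-manage", "list", "server1.example.com");
        run("ipa-replica-manage", "force-sync", "--from", "server1.example.com");
    }
}
```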
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.14/proc-providing-feedback-on-redhat-documentation |
Chapter 8. Assigning roles to hosts | Chapter 8. Assigning roles to hosts You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. A role can be one of the standard Kubernetes types: control plane (master) or worker . The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API. If you do not select a role, the system selects one for you. You can change the role at any time before installation starts. 8.1. Selecting a role by using the web console You can select a role after the host finishes its discovery. Procedure Go to the Host Discovery tab and scroll down to the Host Inventory table. Select the Auto-assign drop-down for the required host. Select Control plane node to assign this host a control plane role. Select Worker to assign this host a worker role. Check the validation status. 8.2. Selecting a role by using the API You can select a role for the host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host can have one of the following roles: master : A host with the master role operates as a control plane node. worker : A host with the worker role operates as a worker node. By default, the Assisted Installer sets a host to auto-assign , which means that the Assisted Installer determines the host's role, master or worker, automatically. Use this procedure to set the host's role. Prerequisites You have added hosts to the cluster. Procedure Refresh the API token: USD source refresh-token Get the host IDs: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ] Modify the host_role setting: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Replace <host_id> with the ID of the host. 8.3. Auto-assigning roles Assisted Installer selects a role automatically for hosts if you do not assign a role yourself. The role selection mechanism factors in the host's memory, CPU, and disk space. It aims to assign a control plane role to the weakest hosts that meet the minimum requirements for control plane nodes. The number of control planes you specify in the cluster definition determines the number of control plane nodes that the Assisted Installer assigns. For details, see Setting the cluster details . All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane and reserve the more capacity-intensive hosts for running the actual workloads. You can override the auto-assign decision at any time before installation. The validations ensure that the automatic selection is valid. 8.4. Additional resources Prerequisites | [
"source refresh-token",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_role-assignment |
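The curl call that modifies host_role in the section above can also be issued from code. The following Java sketch sends the same PATCH request with java.net.http; the API_TOKEN and INFRA_ENV_ID environment variables mirror the shell variables in the examples, and the host ID is the sample value from the output above, so substitute your own.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AssignHostRole {
    public static void main(String[] args) throws Exception {
        String apiToken = System.getenv("API_TOKEN");
        String infraEnvId = System.getenv("INFRA_ENV_ID");
        String hostId = "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5"; // replace with your host ID

        // Same payload as the documented curl example; use "master" instead of
        // "worker" for a control plane node, or skip the call to keep auto-assign.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openshift.com/api/assisted-install/v2/infra-envs/"
                        + infraEnvId + "/hosts/" + hostId))
                .header("Authorization", "Bearer " + apiToken)
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"host_role\":\"worker\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```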
Chapter 104. Platform HTTP | Chapter 104. Platform HTTP Since Camel 3.0 Only consumer is supported The Platform HTTP component allows Camel to use the existing HTTP server from the runtime, for example when running Camel on Spring Boot, Quarkus, or other runtimes. 104.1. Dependencies When using platform-http with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency> 104.2. Platform HTTP Provider To use Platform HTTP, a provider (engine) must be available on the classpath. The purpose is to have drivers for different runtimes such as Quarkus, Vert.x, or Spring Boot. At this moment, only Quarkus and Vert.x are supported, through camel-platform-http-vertx . This JAR must be on the classpath; otherwise, the Platform HTTP component cannot be used and an exception is thrown on startup. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>4.4.0.redhat-00046</version> <!-- use the same version as your Camel core version --> </dependency> 104.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 104.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to configure only a few component options, or none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 104.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 104.4. Component Options The Platform HTTP component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean engine (advanced) An HTTP Server engine implementation to serve the requests. PlatformHttpEngine 104.4.1. Endpoint Options The Platform HTTP endpoint is configured using URI syntax: with the following path and query parameters: 104.4.1.1. Path Parameters (1 parameters) Name Description Default Type path (consumer) Required The path under which this endpoint serves the HTTP requests, for proxy use 'proxy'. String 104.4.1.2. Query Parameters (11 parameters) Name Description Default Type consumes (consumer) The content type this endpoint accepts as an input, such as application/xml or application/json. null or / mean no restriction. String httpMethodRestrict (consumer) A comma separated list of HTTP methods to serve, e.g. GET,POST . If no methods are specified, all methods will be served. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. true boolean produces (consumer) The content type this endpoint produces, such as application/xml or application/json. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fileNameExtWhitelist (consumer (advanced)) A comma or whitespace separated list of file extensions. Uploads having these extensions will be stored locally. Null value or asterisk (*) will allow all files. String headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter headers to and from Camel message. HeaderFilterStrategy platformHttpEngine (advanced) An HTTP Server engine implementation to serve the requests of this endpoint. PlatformHttpEngine 104.5. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.platform-http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.platform-http.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false Boolean camel.component.platform-http.enabled Whether to enable auto configuration of the platform-http component. This is enabled by default. Boolean camel.component.platform-http.engine An HTTP Server engine implementation to serve the requests. The option is an org.apache.camel.component.platform.http.spi.PlatformHttpEngine type. PlatformHttpEngine 104.5.1. Implementing a reverse proxy The Platform HTTP component can act as a reverse proxy; in that case, some headers are populated from the absolute URL received on the request line of the HTTP request. Those headers are specific to the underlying platform. At this moment, this feature is only supported for Vert.x in the camel-platform-http-vertx component. | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>4.4.0.redhat-00046</version> <!-- use the same version as your Camel core version --> </dependency>",
"platform-http:path"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-platform-http-component-starter |
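For orientation, a minimal Java DSL route that uses the component is sketched below. It assumes that the camel-platform-http-starter (or camel-platform-http-vertx) dependency shown above is on the classpath; the /hello path, the GET restriction, and the response text are illustrative values chosen for the example, not values defined by the component.

// Minimal sketch: a consumer-only route served by the runtime's HTTP server.
// Assumes a PlatformHttpEngine provider (for example camel-platform-http-vertx)
// is available; path and response body are arbitrary example values.
import org.apache.camel.builder.RouteBuilder;

public class HelloPlatformHttpRoute extends RouteBuilder {
    @Override
    public void configure() {
        // platform-http supports only the consumer side, so it appears in from(), never in to()
        from("platform-http:/hello?httpMethodRestrict=GET")
            .setBody(constant("Hello from platform-http"));
    }
}

On Spring Boot, registering this RouteBuilder as a bean should be enough for the route to be served on the embedded server's port; no explicit server setup is expected in the route itself.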
Part I. New Features | Part I. New Features This part documents new features and major enhancements introduced in Red Hat Enterprise Linux 7.6. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new-features |
Reference Guide | Reference Guide Red Hat Enterprise Linux 4 For Red Hat Enterprise Linux 4 Edition 4 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/index |
17.2. DistributedCallable API | 17.2. DistributedCallable API The DistributedCallable interface is a subtype of the existing Callable from the java.util.concurrent package, and can be executed in a remote JVM and receive input from Red Hat JBoss Data Grid. The DistributedCallable interface is used to facilitate tasks that require access to JBoss Data Grid cache data. When using the DistributedCallable API to execute a task, the task's main algorithm remains unchanged; however, the input source is changed. Users who have already implemented the Callable interface must extend DistributedCallable if access to the cache or the set of passed-in keys is required. Example 17.1. Using the DistributedCallable API | [
"public interface DistributedCallable<K, V, T> extends Callable<T> { /** * Invoked by execution environment after DistributedCallable * has been migrated for execution to a specific Infinispan node. * * @param cache * cache whose keys are used as input data for this * DistributedCallable task * @param inputKeys * keys used as input for this DistributedCallable task */ public void setEnvironment(Cache<K, V> cache, Set<K> inputKeys); }"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/DistributedCallable_API |
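To make the contract above concrete, the following sketch shows what an implementation might look like. The SumTask class, the String/Integer cache types, and the summing logic are illustrative assumptions rather than code from the guide; only the setEnvironment and call methods come from the interface shown above.

// Illustrative sketch: a DistributedCallable that sums the Integer values stored
// under the input keys assigned to the node where the task executes.
import java.io.Serializable;
import java.util.Set;

import org.infinispan.Cache;
import org.infinispan.distexec.DistributedCallable;

public class SumTask implements DistributedCallable<String, Integer, Integer>, Serializable {

    private transient Cache<String, Integer> cache;
    private transient Set<String> inputKeys;

    @Override
    public void setEnvironment(Cache<String, Integer> cache, Set<String> inputKeys) {
        // Invoked on the target node before call(), supplying the local cache and input keys
        this.cache = cache;
        this.inputKeys = inputKeys;
    }

    @Override
    public Integer call() throws Exception {
        int sum = 0;
        for (String key : inputKeys) {
            Integer value = cache.get(key);
            if (value != null) {
                sum += value;
            }
        }
        return sum;
    }
}

A caller would then typically submit the task through the distributed executor API, for example new DefaultExecutorService(cache).submitEverywhere(new SumTask(), "k1", "k2"), collecting one Future per node that owns any of the supplied keys; the executor class and the keys here are likewise only an example.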
10.3. Delegating Permissions over Users | 10.3. Delegating Permissions over Users Delegation is very similar to roles in that one group of users is assigned permission to manage the entries for another group of users. However, the delegated authority is much more similar to self-service rules in that complete access is granted but only to specific user attributes, not to the entire entry. Also, the groups in delegated authority are existing IdM user groups instead of roles specifically created for access controls. 10.3.1. Delegating Access to User Groups in the Web UI On the IPA Server tab in the top menu, select the Role-Based Access Control Delegations subtab. Click the Add link at the top of the list of the delegation access control instructions. Figure 10.4. Adding a New Delegation Name the new delegation ACI. Set the permissions by selecting the check boxes whether users will have the right to view the given attributes (read) and add or change the given attributes (write). Some users may have a need to see information, but should not be able to edit it. In the User group drop-down menu, select the group who is being granted permissions to the entries of users in the user group. Figure 10.5. Form for Adding a Delegation In the Member user group drop-down menu, select the group whose entries can be edited by members of the delegation group. In the attributes box, select the check boxes by the attributes to which the member user group is being granted permission. Click the Add button to save the new delegation ACI. 10.3.2. Delegating Access to User Groups in the Command Line A new delegation access control rule is added using the delegation-add command. There are three required arguments: --group , the group who is being granted permissions to the entries of users in the user group. --membergroup , the group whose entries can be edited by members of the delegation group. --attrs , the attributes which users in the member group are allowed to view or edit. For example: Delegation rules are edited using the delegation-mod command. The --attrs option overwrites whatever the list of supported attributes was, so always include the complete list of attributes along with any new attributes. Important Include all of the attributes when modifying a delegation rule, including existing ones. | [
"ipa delegation-add \"basic manager attrs\" --attrs=manager --attrs=title --attrs=employeetype --attrs=employeenumber --group=engineering_managers --membergroup=engineering -------------------------------------- Added delegation \"basic manager attrs\" -------------------------------------- Delegation name: basic manager attrs Permissions: write Attributes: manager, title, employeetype, employeenumber Member user group: engineering User group: engineering_managers",
"[jsmith@server ~]USD ipa delegation-mod \"basic manager attrs\" --attrs=manager --attrs=title --attrs=employeetype --attrs=employeenumber --attrs=displayname ----------------------------------------- Modified delegation \"basic manager attrs\" ----------------------------------------- Delegation name: basic manager attrs Permissions: write Attributes: manager, title, employeetype, employeenumber, displayname Member user group: engineering User group: engineering_managers"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/delegating-users |
Chapter 1. Introduction to JDK Flight Recorder | Chapter 1. Introduction to JDK Flight Recorder JDK Flight Recorder (JFR) is a low-overhead framework for monitoring and profiling Java applications. For more information, see JEP 328: Flight Recorder . You can collect data from events originating within the JVM and the application code. The data is first written to in-memory thread-local buffers, then promoted to a fixed-size global ring buffer, and finally flushed to JFR files (*.jfr) on disk. Other applications, such as the JDK Mission Control (JMC) tool, can consume these files for analysis. 1.1. JDK Flight Recorder (JFR) components You can use JFR functionality to observe events that run inside a JVM, and then create recordings from data collected from these observed events. The following list details key JFR functionality: Recordings You can manage system recordings. Each recording has a unique configuration. You can start or stop the recording, or save it to disk on demand. Events You can use events or custom events to trace your Java application's data and metadata, and then save the data and metadata from either event type in a JFR file. You can use various tools, such as Java Mission Control (JMC), jcmd , and so on, to view and analyze information stored in a JFR file. The Java Virtual Machine (JVM) provides many pre-existing events, and more are continuously added. An API is available for users to inject custom events into their applications. To minimize overhead, you can enable or disable any event when recording by supplying event configurations. These configurations take the form of XML documents and are called JFR profiles ( *.jfc ). The Red Hat build of OpenJDK comes with the following two profiles for the most common set of use cases: default : The default profile is a low-overhead configuration that is safe for continuous use in production environments. Typically, overhead is less than 1%. profile : The profile profile is a low-overhead configuration that is ideal for profiling. Typically, overhead is less than 2%. 1.2. Benefits of using JDK Flight Recorder Some of the key benefits of using JDK Flight Recorder (JFR) are: JFR allows recording on a running JVM. It is ideal to use JFR in production environments where it is difficult to restart or rebuild the application. JFR allows for the definition of custom events and metrics to monitor. JFR is built into the JVM to achieve the minimum performance overhead (around 1%). JFR uses coherent data modeling to provide better cross-referencing of events and filtering of data. JFR allows for monitoring of third-party applications using APIs. JFR helps in reducing the cost of ownership by: Spending less time diagnosing. Aiding in troubleshooting problems. JFR reduces operating costs and business interruptions by: Providing faster resolution time. Identifying performance issues, which helps in improving system efficiency. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/openjdk-flight-recorded-overview
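Because the chapter mentions that an API is available for injecting custom events but does not show it, a small sketch using the jdk.jfr API that ships with Red Hat build of OpenJDK 11 follows. The event name, labels, and fields are invented for the example and are not part of the product documentation.

// Illustrative sketch of a custom JFR event; names and fields are arbitrary examples.
import jdk.jfr.Description;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("com.example.OrderProcessed")
@Label("Order Processed")
@Description("Emitted after an order has been processed")
public class OrderProcessedEvent extends Event {
    @Label("Order Id")
    long orderId;

    @Label("Outcome")
    String outcome;
}

// In application code, the event is timed and recorded like this:
// OrderProcessedEvent event = new OrderProcessedEvent();
// event.begin();
// ... perform the work being measured ...
// event.orderId = 42;
// event.outcome = "shipped";
// event.commit();   // recorded only if JFR is running and the event is enabled

A recording that captures such events can be started, for example, with jcmd <pid> JFR.start name=demo filename=demo.jfr and then opened in JDK Mission Control for analysis.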
Builds using Shipwright | Builds using Shipwright OpenShift Container Platform 4.15 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_shipwright/index |
GitOps | GitOps OpenShift Container Platform 4.15 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/gitops/index |
Chapter 3. Binding [v1] | Chapter 3. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object Required target 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata target object ObjectReference contains enough information to let you inspect or modify the referred object. 3.1.1. .target Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/bindings POST : create a Binding /api/v1/namespaces/{namespace}/pods/{name}/binding POST : create binding of a Pod 3.2.1. /api/v1/namespaces/{namespace}/bindings Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a Binding Table 3.2. Body parameters Parameter Type Description body Binding schema Table 3.3. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty 3.2.2. /api/v1/namespaces/{namespace}/pods/{name}/binding Table 3.4. Global path parameters Parameter Type Description name string name of the Binding Table 3.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create binding of a Pod Table 3.6. Body parameters Parameter Type Description body Binding schema Table 3.7. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/metadata_apis/binding-v1
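As a concrete illustration of the schema described above, a minimal Binding body is sketched here in YAML; the pod name my-pod, the default namespace, and the node name worker-1 are placeholders invented for the example. Keep in mind that the resource is deprecated in favor of the pods binding subresource.

apiVersion: v1
kind: Binding
metadata:
  name: my-pod        # must match the name of the pod being bound (placeholder)
  namespace: default
target:
  apiVersion: v1
  kind: Node
  name: worker-1      # placeholder node name

Posting this body to /api/v1/namespaces/default/pods/my-pod/binding (the second endpoint listed above) assigns the pod to worker-1, which is essentially what a scheduler does on the cluster's behalf.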
Chapter 1. Customizing nodes | Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.16.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Rename the Dockerfile: USD cat simple-kmod.conf Example Dockerfile KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of kmods-via-containers@.service for your kernel module, simple-kmod in this example: USD sudo make install Enable the kmods-via-containers@simple-kmod.service instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable kmods-via-containers@simple-kmod.service --now Review the service status: USD sudo systemctl status kmods-via-containers@simple-kmod.service Example output ● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/kmods-via-containers@simple-kmod.service; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmods-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.16.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: kmods-via-containers@simple-kmod.service enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed using an included advertisement, and only uses that supplied advertisement if the number of online servers do not meet the set threshold. 
For example, the threshold value of 2 in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.16.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4 threshold: 2 5 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding rather than fetching the advertisement from the server at runtime. This lets the server be unavailable at provisioning time. 5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. 
Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Optional: For offline Tang provisioning: Obtain the advertisement from the server using the curl command. Replace http://tang2.example.com:7500 with the URL of your Tang server: USD curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws Expected output {"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"} Provide the advertisement file to Clevis for encryption: USD clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. 
Butane config example for a boot device variant: openshift version: 4.16.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: "{"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"}" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Optional: Specify the advertisement for your offline Tang server in valid JSON format. 10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 11 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 12 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 13 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. If you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. 
Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. 
The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. 
Note OpenShift Container Platform 4.16 does not support software RAIDs on the installation drive. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.16.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.16.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.4.5. Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume Intel(R) VROC is a type of hybrid RAID, where some of the maintenance is offloaded to the hardware, but appears as software RAID to the operating system. The following procedure configures an Intel(R) VROC-enabled RAID1. 
Prerequisites You have a system with Intel(R) Volume Management Device (VMD) enabled. Procedure Create the Intel(R) Matrix Storage Manager (IMSM) RAID container by running the following command: USD mdadm -CR /dev/md/imsm0 -e \ imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1 1 The RAID device names. In this example, there are two devices listed. If you provide more than two device names, you must adjust the -n flag. For example, listing three devices would use the flag -n3 . Create the RAID1 storage inside the container: Create a dummy RAID0 volume in front of the real RAID1 volume by running the following command: USD mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean Create the real RAID1 array by running the following command: USD mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0 Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the following commands: USD mdadm -S /dev/md/dummy \ mdadm -S /dev/md/coreos \ mdadm --kill-subarray=0 /dev/md/imsm0 Restart the RAID1 arrays by running the following command: USD mdadm -A /dev/md/coreos /dev/md/imsm0 Install RHCOS on the RAID1 device: Get the UUID of the IMSM container by running the following command: USD mdadm --detail --export /dev/md/imsm0 Install RHCOS and include the rd.md.uuid kernel argument by running the following command: USD coreos-installer install /dev/md/coreos \ --append-karg rd.md.uuid=<md_UUID> 1 ... 1 The UUID of the IMSM container. Include any additional coreos-installer arguments you need to install RHCOS. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. 
If the cluster is already running, apply the file: $ oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.16.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.16.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws",
"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}",
"clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.16.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.16.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.16.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1",
"mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean",
"mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0",
"mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0",
"mdadm -A /dev/md/coreos /dev/md/imsm0",
"mdadm --detail --export /dev/md/imsm0",
"coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installation_configuration/installing-customizing |
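After the chrony machine config described above has rolled out, a quick spot check on a node can confirm the configured time sources. This is a suggested verification sketch rather than part of the documented procedure; it assumes a logged-in oc session, and <node_name> is a placeholder for a real node:

# Spot-check the rendered chrony configuration and the active time sources on one node.
oc debug node/<node_name> -- chroot /host cat /etc/chrony.conf
oc debug node/<node_name> -- chroot /host chronyc sources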
Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation | Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads Deployments . Click Workloads Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/backing-openshift-container-platform-applications-with-openshift-data-foundation_osp |
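The procedure above uses the OpenShift Web Console. A roughly equivalent CLI flow is sketched below; it is not part of the documented steps, and the deployment name, claim name, storage class, size, and mount path are placeholders to replace with your own values:

# Sketch: attach a new OpenShift Data Foundation-backed PVC to an existing deployment.
oc set volume deployment/<deployment_name> --add \
  --type=persistentVolumeClaim \
  --claim-name=<pvc_name> \
  --claim-class=ocs-storagecluster-ceph-rbd \
  --claim-mode=ReadWriteOnce \
  --claim-size=10Gi \
  --mount-path=/var/lib/<data_dir>
# List the volumes on the deployment to confirm the claim was added.
oc set volume deployment/<deployment_name> --all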
Chapter 12. EgressRouter [network.operator.openshift.io/v1] | Chapter 12. EgressRouter [network.operator.openshift.io/v1] Description EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage: - A service called <name> - An egress pod called <name> - A NAD called <name> Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object. Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired egress router. status object Observed status of EgressRouter. 12.1.1. .spec Description Specification of the desired egress router. Type object Required addresses mode networkInterface Property Type Description addresses array List of IP addresses to configure on the pod's secondary interface. addresses[] object EgressRouterAddress contains a pair of IP CIDR and gateway to be configured on the router's interface mode string Mode depicts the mode that is used for the egress router. The default mode is "Redirect" and is the only supported mode currently. networkInterface object Specification of interface to create/use. The default is macvlan. Currently only macvlan is supported. redirect object Redirect represents the configuration parameters specific to redirect mode. 12.1.2. .spec.addresses Description List of IP addresses to configure on the pod's secondary interface. Type array 12.1.3. .spec.addresses[] Description EgressRouterAddress contains a pair of IP CIDR and gateway to be configured on the router's interface Type object Required ip Property Type Description gateway string IP address of the -hop gateway, if it cannot be automatically determined. Can be IPv4 or IPv6. ip string IP is the address to configure on the router's interface. Can be IPv4 or IPv6. 12.1.4. .spec.networkInterface Description Specification of interface to create/use. The default is macvlan. Currently only macvlan is supported. Type object Property Type Description macvlan object Arguments specific to the interfaceType macvlan 12.1.5. .spec.networkInterface.macvlan Description Arguments specific to the interfaceType macvlan Type object Required mode Property Type Description master string Name of the master interface. Need not be specified if it can be inferred from the IP address. 
mode string Mode depicts the mode that is used for the macvlan interface; one of Bridge|Private|VEPA|Passthru. The default mode is "Bridge". 12.1.6. .spec.redirect Description Redirect represents the configuration parameters specific to redirect mode. Type object Property Type Description fallbackIP string FallbackIP specifies the remote destination's IP address. Can be IPv4 or IPv6. If no redirect rules are specified, all traffic from the router are redirected to this IP. If redirect rules are specified, then any connections on any other port (undefined in the rules) on the router will be redirected to this IP. If redirect rules are specified and no fallback IP is provided, connections on other ports will simply be rejected. redirectRules array List of L4RedirectRules that define the DNAT redirection from the pod to the destination in redirect mode. redirectRules[] object L4RedirectRule defines a DNAT redirection from a given port to a destination IP and port. 12.1.7. .spec.redirect.redirectRules Description List of L4RedirectRules that define the DNAT redirection from the pod to the destination in redirect mode. Type array 12.1.8. .spec.redirect.redirectRules[] Description L4RedirectRule defines a DNAT redirection from a given port to a destination IP and port. Type object Required destinationIP port protocol Property Type Description destinationIP string IP specifies the remote destination's IP address. Can be IPv4 or IPv6. port integer Port is the port number to which clients should send traffic to be redirected. protocol string Protocol can be TCP, SCTP or UDP. targetPort integer TargetPort allows specifying the port number on the remote destination to which the traffic gets redirected to. If unspecified, the value from "Port" is used. 12.1.9. .status Description Observed status of EgressRouter. Type object Required conditions Property Type Description conditions array Observed status of the egress router conditions[] object EgressRouterStatusCondition represents the state of the egress router's managed and monitored components. 12.1.10. .status.conditions Description Observed status of the egress router Type array 12.1.11. .status.conditions[] Description EgressRouterStatusCondition represents the state of the egress router's managed and monitored components. Type object Required status type Property Type Description lastTransitionTime `` LastTransitionTime is the time of the last update to the current status property. message string Message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. reason string Reason is the CamelCase reason for the condition's current status. status string Status of the condition, one of True, False, Unknown. type string Type specifies the aspect reported by this condition; one of Available, Progressing, Degraded 12.2. 
API endpoints The following API endpoints are available: /apis/network.operator.openshift.io/v1/egressrouters GET : list objects of kind EgressRouter /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters DELETE : delete collection of EgressRouter GET : list objects of kind EgressRouter POST : create an EgressRouter /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name} DELETE : delete an EgressRouter GET : read the specified EgressRouter PATCH : partially update the specified EgressRouter PUT : replace the specified EgressRouter /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}/status GET : read status of the specified EgressRouter PATCH : partially update status of the specified EgressRouter PUT : replace status of the specified EgressRouter 12.2.1. /apis/network.operator.openshift.io/v1/egressrouters HTTP method GET Description list objects of kind EgressRouter Table 12.1. HTTP responses HTTP code Reponse body 200 - OK EgressRouterList schema 401 - Unauthorized Empty 12.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters HTTP method DELETE Description delete collection of EgressRouter Table 12.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressRouter Table 12.3. HTTP responses HTTP code Reponse body 200 - OK EgressRouterList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressRouter Table 12.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.5. Body parameters Parameter Type Description body EgressRouter schema Table 12.6. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 201 - Created EgressRouter schema 202 - Accepted EgressRouter schema 401 - Unauthorized Empty 12.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name} Table 12.7. Global path parameters Parameter Type Description name string name of the EgressRouter HTTP method DELETE Description delete an EgressRouter Table 12.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressRouter Table 12.10. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressRouter Table 12.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.12. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressRouter Table 12.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.14. Body parameters Parameter Type Description body EgressRouter schema Table 12.15. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 201 - Created EgressRouter schema 401 - Unauthorized Empty 12.2.4. 
/apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}/status Table 12.16. Global path parameters Parameter Type Description name string name of the EgressRouter HTTP method GET Description read status of the specified EgressRouter Table 12.17. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressRouter Table 12.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.19. HTTP responses HTTP code Reponse body 200 - OK EgressRouter schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressRouter Table 12.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.21. Body parameters Parameter Type Description body EgressRouter schema Table 12.22. 
HTTP responses HTTP code Response body 200 - OK EgressRouter schema 201 - Created EgressRouter schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/egressrouter-network-operator-openshift-io-v1
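Putting the schema together, a minimal redirect-mode EgressRouter could look like the following sketch. The namespace, IP addresses, gateway, and destination are illustrative values only:

# Illustrative EgressRouter in redirect mode; all values are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
  name: egress-router-redirect
  namespace: egress-router-project
spec:
  addresses:
    - ip: 192.168.12.99/24
      gateway: 192.168.12.1
  mode: Redirect
  networkInterface:
    macvlan:
      mode: Bridge
  redirect:
    redirectRules:
      - destinationIP: 10.0.0.99
        port: 80
        protocol: TCP
        targetPort: 8080
    fallbackIP: 10.0.0.99
EOF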
Chapter 13. GenericKafkaListenerConfiguration schema reference | Chapter 13. GenericKafkaListenerConfiguration schema reference Used in: GenericKafkaListener Full list of GenericKafkaListenerConfiguration schema properties Configuration for Kafka listeners. 13.1. brokerCertChainAndKey The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates. Example configuration for a loadbalancer external listener with TLS encryption enabled listeners: #... - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... When the certificate or key in the brokerCertChainAndKey secret is updated, the operator will automatically detect it in the reconciliation and trigger a rolling update of the Kafka brokers to reload the certificate. 13.2. externalTrafficPolicy The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift you can choose Local or Cluster . Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster . 13.3. loadBalancerSourceRanges The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift use source ranges, in addition to labels and annotations, to customize how a service is created. Example source ranges configured for a loadbalancer listener listeners: #... - name: external3 port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... # ... 13.4. class The class property is only used with ingress listeners. You can configure the Ingress class using the class property. Example of an external listener of type ingress using Ingress class nginx-internal listeners: #... - name: external2 port: 9094 type: ingress tls: true configuration: class: nginx-internal # ... # ... 13.5. preferredNodePortAddressType The preferredNodePortAddressType property is only used with nodeport listeners. Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, Streams for Apache Kafka proceeds through the types in the standard order of priority: ExternalDNS ExternalIP Hostname InternalDNS InternalIP Example of an external listener configured with a preferred node port address type listeners: #... - name: external4 port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS # ... # ... 13.6. useServiceDnsDomain The useServiceDnsDomain property is only used with internal and cluster-ip listeners. It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local ) are used. With useServiceDnsDomain set as false , the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc . 
With useServiceDnsDomain set as true , the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local . Default is false . Example of an internal listener configured to use the Service DNS domain listeners: #... - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true # ... # ... If your OpenShift cluster uses a different service suffix than .cluster.local , you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. 13.7. GenericKafkaListenerConfiguration schema properties Property Property type Description brokerCertChainAndKey CertAndKeySecretSource Reference to the Secret which holds the certificate and private key pair which will be used for this listener. The certificate can optionally contain the whole chain. This field can be used only with listeners with enabled TLS encryption. externalTrafficPolicy string (one of [Local, Cluster]) Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default.This field can be used only with loadbalancer or nodeport type listener. loadBalancerSourceRanges string array A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. This field can be used only with loadbalancer type listener. bootstrap GenericKafkaListenerConfigurationBootstrap Bootstrap configuration. brokers GenericKafkaListenerConfigurationBroker array Per-broker configurations. ipFamilyPolicy string (one of [RequireDualStack, SingleStack, PreferDualStack]) Specifies the IP Family Policy used by the service. Available options are SingleStack , PreferDualStack and RequireDualStack . SingleStack is for a single IP family. PreferDualStack is for two IP families on dual-stack configured clusters or a single IP family on single-stack clusters. RequireDualStack fails unless there are two IP families on dual-stack configured clusters. If unspecified, OpenShift will choose the default value based on the service type. ipFamilies string (one or more of [IPv6, IPv4]) array Specifies the IP Families used by the service. Available options are IPv4 and IPv6 . If unspecified, OpenShift will choose the default value based on the ipFamilyPolicy setting. createBootstrapService boolean Whether to create the bootstrap service or not. The bootstrap service is created by default (if not specified differently). This field can be used with the loadBalancer type listener. class string Configures a specific class for Ingress and LoadBalancer that defines which controller will be used. This field can only be used with ingress and loadbalancer type listeners. If not specified, the default controller is used. For an ingress listener, set the ingressClassName property in the Ingress resources. For a loadbalancer listener, set the loadBalancerClass property in the Service resources. 
finalizers string array A list of finalizers which will be configured for the LoadBalancer type Services created for this listener. If supported by the platform, the finalizer service.kubernetes.io/load-balancer-cleanup to make sure that the external load balancer is deleted together with the service.For more information, see https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#garbage-collecting-load-balancers . This field can be used only with loadbalancer type listeners. maxConnectionCreationRate integer The maximum connection creation rate we allow in this listener at any time. New connections will be throttled if the limit is reached. maxConnections integer The maximum number of connections we allow for this listener in the broker at any time. New connections are blocked if the limit is reached. preferredNodePortAddressType string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): ExternalDNS ExternalIP InternalDNS InternalIP Hostname This field is used to select the preferred address type, which is checked first. If no address is found for this address type, the other types are checked in the default order. This field can only be used with nodeport type listener. useServiceDnsDomain boolean Configures whether the OpenShift service DNS domain should be used or not. If set to true , the generated addresses will contain the service DNS domain suffix (by default .cluster.local , can be configured using environment variable KUBERNETES_SERVICE_DNS_DOMAIN ). Defaults to false .This field can be used only with internal and cluster-ip type listeners. | [
"listeners: # - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"listeners: # - name: external3 port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #",
"listeners: # - name: external2 port: 9094 type: ingress tls: true configuration: class: nginx-internal #",
"listeners: # - name: external4 port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #",
"listeners: # - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-generickafkalistenerconfiguration-reference |
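As a consolidated illustration of several of the properties described above, the following sketch writes a single listener entry that combines external traffic policy, source ranges, connection limits, and bootstrap service control. The values are placeholders, and the fragment is meant to be merged into the .spec.kafka.listeners list of your Kafka resource rather than applied on its own:

# Illustrative listener entry combining several configuration properties from the table above.
cat <<'EOF' > external-loadbalancer-listener.yaml
- name: external5
  port: 9094
  type: loadbalancer
  tls: true
  configuration:
    externalTrafficPolicy: Local
    loadBalancerSourceRanges:
      - 10.0.0.0/8
    createBootstrapService: false
    maxConnections: 500
    maxConnectionCreationRate: 100
EOF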
Chapter 18. Running Skopeo, Buildah, and Podman in a container | Chapter 18. Running Skopeo, Buildah, and Podman in a container You can run Skopeo, Buildah, and Podman in a container. With Skopeo, you can inspect images on a remote registry without having to download the entire image with all its layers. You can also use Skopeo for copying images, signing images, syncing images, and converting images across different formats and layer compressions. Buildah facilitates building OCI container images. With Buildah, you can create a working container, either from scratch or using an image as a starting point. You can create an image either from a working container or using the instructions in a Containerfile . You can mount and unmount a working container's root filesystem. With Podman, you can manage containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman is based on a libpod library for container lifecycle management. The libpod library provides APIs for managing containers, pods, container images, and volumes. Reasons to run Buildah, Skopeo, and Podman in a container: CI/CD system : Podman and Skopeo : You can run a CI/CD system inside of Kubernetes or use OpenShift to build your container images, and possibly distribute those images across different container registries. To integrate Skopeo into a Kubernetes workflow, you must run it in a container. Buildah : You want to build OCI/container images within a Kubernetes or OpenShift CI/CD systems that are constantly building images. Previously, a Docker socket was used for connecting to the container engine and performing a docker build command. This was the equivalent of giving root access to the system without requiring a password which is not secure. For this reason, use Buildah in a container instead. Different versions : All : You are running an older operating system on the host but you want to run the latest version of Skopeo, Buildah, or Podman. The solution is to run the container tools in a container. For example, this is useful for running the latest version of the container tools provided in Red Hat Enterprise Linux 8 on a Red Hat Enterprise Linux 7 container host which does not have access to the newest versions natively. HPC environment : All : A common restriction in HPC environments is that non-root users are not allowed to install packages on the host. When you run Skopeo, Buildah, or Podman in a container, you can perform these specific tasks as a non-root user. 18.1. Running Skopeo in a container You can inspect a remote container image using Skopeo. Running Skopeo in a container means that the container root filesystem is isolated from the host root filesystem. To share or copy files between the host and container, you have to mount files and directories. Prerequisites The container-tools module is installed. Procedure Log in to the registry.redhat.io registry: Get the registry.redhat.io/rhel8/skopeo container image: Inspect a remote container image registry.access.redhat.com/ubi8/ubi using Skopeo: The --rm option removes the registry.redhat.io/rhel8/skopeo image after the container exits. Additional resources How to run skopeo in a container 18.2. Running Skopeo in a container using credentials Working with container registries requires an authentication to access and alter data. Skopeo supports various ways to specify credentials. With this approach you can specify credentials on the command line using the --cred USERNAME[:PASSWORD] option. 
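The procedure below passes the credentials directly on the command line, as in the documented steps. As a small variation that is not part of the documented procedure, you can prompt for the password at run time so that it does not end up in your shell history; the variable names are illustrative:

# USER and IMAGE are assumed to already hold the registry user name and the image reference.
# Prompt for the registry password instead of typing it into the command line.
read -r -s -p "Registry password: " PASSWORD; echo
podman run --rm registry.redhat.io/rhel8/skopeo inspect --creds "$USER:$PASSWORD" docker://$IMAGE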
Prerequisites The container-tools module is installed. Procedure Inspect a remote container image using Skopeo against a locked registry: Additional resources How to run skopeo in a container 18.3. Running Skopeo in a container using authfiles You can use an authentication file (authfile) to specify credentials. The skopeo login command logs into the specific registry and stores the authentication token in the authfile. The advantage of using authfiles is preventing the need to repeatedly enter credentials. When running on the same host, all container tools such as Skopeo, Buildah, and Podman share the same authfile. When running Skopeo in a container, you have to either share the authfile on the host by volume-mounting the authfile in the container, or you have to reauthenticate within the container. Prerequisites The container-tools module is installed. Procedure Inspect a remote container image using Skopeo against a locked registry: The -v USDAUTHFILE:/auth.json option volume-mounts an authfile at /auth.json within the container. Skopeo can now access the authentication tokens in the authfile on the host and get secure access to the registry. Other Skopeo commands work similarly, for example: Use the skopeo-copy command to specify credentials on the command line for the source and destination image using the --source-creds and --dest-creds options. It also reads the /auth.json authfile. If you want to specify separate authfiles for the source and destination image, use the --source-authfile and --dest-authfile options and volume-mount those authfiles from the host into the container. Additional resources How to run skopeo in a container 18.4. Copying container images to or from the host Skopeo, Buildah, and Podman share the same local container-image storage. If you want to copy containers to or from the host container storage, you need to mount it into the Skopeo container. Note The path to the host container storage differs between root ( /var/lib/containers/storage ) and non-root users ( USDHOME/.local/share/containers/storage ). Prerequisites The container-tools module is installed. Procedure Copy the registry.access.redhat.com/ubi8/ubi image into your local container storage: The --privileged option disables all security mechanisms. Red Hat recommends only using this option in trusted environments. To avoid disabling security mechanisms, export the images to a tarball or any other path-based image transport and mount them in the Skopeo container: USD podman save --format oci-archive -o oci.tar USDIMAGE USD podman run --rm -v oci.tar:/oci.tar registry.redhat.io/rhel8/skopeo copy oci-archive:/oci.tar USDDESTINATION Optional: List images in local storage: Additional resources How to run skopeo in a container 18.5. Running Buildah in a container The procedure demonstrates how to run Buildah in a container and create a working container based on an image. Prerequisites The container-tools module is installed. Procedure Log in to the registry.redhat.io registry: Pull and run the registry.redhat.io/rhel8/buildah image: The --rm option removes the registry.redhat.io/rhel8/buildah image after the container exits. The --device option adds a host device to the container. The sys_chroot - capability to change to a different root directory. It is not included in the default capabilities of a container. 
Create a new container using a registry.access.redhat.com/ubi8 image: Run the ls / command inside the ubi8-working-container container: Optional: List all images in a local storage: Optional: List the working containers and their base images: Optional: Push the registry.access.redhat.com/ubi8 image to the a local registry located on registry.example.com : Additional resources Best practices for running Buildah in a container 18.6. Privileged and unprivileged Podman containers By default, Podman containers are unprivileged and cannot, for example, modify parts of the operating system on the host. This is because by default a container is only allowed limited access to devices. The following list emphasizes important properties of privileged containers. You can run the privileged container using the podman run --privileged <image_name> command. A privileged container is given the same access to devices as the user launching the container. A privileged container disables the security features that isolate the container from the host. Dropped Capabilities, limited devices, read-only mount points, Apparmor/SELinux separation, and Seccomp filters are all disabled. A privileged container cannot have more privileges than the account that launched them. Additional resources How to use the --privileged flag with container engines podman-run man page on your system 18.7. Running Podman with extended privileges If you cannot run your workloads in a rootless environment, you need to run these workloads as a root user. Running a container with extended privileges should be done judiciously, because it disables all security features. Prerequisites The container-tools module is installed. Procedure Run the Podman container in the Podman container: Run the outer container named privileged_podman based on the registry.access.redhat.com/ubi8/podman image. The --privileged option disables the security features that isolate the container from the host. Run podman run ubi8 echo hello command to create the inner container based on the ubi8 image. Notice that the ubi8 short image name was resolved as an alias. As a result, the registry.access.redhat.com/ubi8:latest image is pulled. Verification List all containers: Additional resources How to use Podman inside of a container podman-run man page on your system 18.8. Running Podman with less privileges You can run two nested Podman containers without the --privileged option. Running the container without the --privileged option is a more secure option. This can be useful when you want to try out different versions of Podman in the most secure way possible. Prerequisites The container-tools module is installed. Procedure Run two nested containers: Run the outer container named unprivileged_podman based on the registry.access.redhat.com/ubi8/podman image. The --security-opt label=disable option disables SELinux separation on the host Podman. SELinux does not allow containerized processes to mount all of the file systems required to run inside a container. The --user podman option automatically causes the Podman inside the outer container to run within the user namespace. The --device /dev/fuse option uses the fuse-overlayfs package inside the container. This option adds /dev/fuse to the outer container, so that Podman inside the container can use it. Run podman run ubi8 echo hello command to create the inner container based on the ubi8 image. Notice that the ubi8 short image name was resolved as an alias. 
As a result, the registry.access.redhat.com/ubi8:latest image is pulled. Verification List all containers: 18.9. Building a container inside a Podman container You can run a container in a container using Podman. This example shows how to use Podman to build and run another container from within this container. The container will run "Moon-buggy", a simple text-based game. Prerequisites The container-tools module is installed. You are logged in to the registry.redhat.io registry: Procedure Run the container based on registry.redhat.io/rhel8/podman image: Run the outer container named podman_container based on the registry.redhat.io/rhel8/podman image. The --it option specifies that you want to run an interactive bash shell within a container. The --privileged option disables the security features that isolate the container from the host. Create a Containerfile inside the podman_container container: The commands in the Containerfile cause the following build command to: Build a container from the registry.access.redhat.com/ubi8/ubi image. Install the epel-release-latest-8.noarch.rpm package. Install the moon-buggy package. Set the container command. Build a new container image named moon-buggy using the Containerfile : Optional: List all images: Run a new container based on a moon-buggy container: Optional: Tag the moon-buggy image: Optional: Push the moon-buggy image to the registry: Additional resources Technology preview: Running a container inside a container | [
"podman login registry.redhat.io Username: [email protected] Password: <password> Login Succeeded!",
"podman pull registry.redhat.io/rhel8/skopeo",
"podman run --rm registry.redhat.io/rhel8/skopeo skopeo inspect docker://registry.access.redhat.com/ubi8/ubi { \"Name\": \"registry.access.redhat.com/ubi8/ubi\", \"Labels\": { \"architecture\": \"x86_64\", \"name\": \"ubi8\", \"summary\": \"Provides the latest release of Red Hat Universal Base Image 8.\", \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi8/images/8.2-347\", }, \"Architecture\": \"amd64\", \"Os\": \"linux\", \"Layers\": [ ], \"Env\": [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"container=oci\" ] }",
"podman run --rm registry.redhat.io/rhel8/skopeo inspect --creds USDUSER:USDPASSWORD docker://USDIMAGE",
"podman run --rm -v USDAUTHFILE:/auth.json registry.redhat.io/rhel8/skopeo inspect docker://USDIMAGE",
"podman run --privileged --rm -v USDHOME/.local/share/containers/storage:/var/lib/containers/storage registry.redhat.io/rhel8/skopeo skopeo copy docker://registry.access.redhat.com/ubi8/ubi containers-storage:registry.access.redhat.com/ubi8/ubi",
"podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest ecbc6f53bba0 8 weeks ago 211 MB",
"podman login registry.redhat.io Username: [email protected] Password: <password> Login Succeeded!",
"podman run --rm --device /dev/fuse -it registry.redhat.io/rhel8/buildah /bin/bash",
"buildah from registry.access.redhat.com/ubi8 ubi8-working-container",
"buildah run --isolation=chroot ubi8-working-container ls / bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv",
"buildah images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8 latest ecbc6f53bba0 5 weeks ago 211 MB",
"buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 0aaba7192762 * ecbc6f53bba0 registry.access.redhat.com/ub... ubi8-working-container",
"buildah push ecbc6f53bba0 registry.example.com:5000/ubi8/ubi",
"podman run --privileged --name=privileged_podman registry.access.redhat.com//podman podman run ubi8 echo hello Resolved \"ubi8\" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi8:latest Storing signatures hello",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 52537876caf4 registry.access.redhat.com/ubi8/podman podman run ubi8 e... 30 seconds ago Exited (0) 13 seconds ago privileged_podman",
"podman run --name=unprivileged_podman --security-opt label=disable --user podman --device /dev/fuse registry.access.redhat.com/ubi8/podman podman run ubi8 echo hello",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a47b26290f43 podman run ubi8 e... 30 seconds ago Exited (0) 13 seconds ago unprivileged_podman",
"podman login registry.redhat.io",
"podman run --privileged --name podman_container -it registry.redhat.io/rhel8/podman /bin/bash",
"vi Containerfile FROM registry.access.redhat.com/ubi8/ubi RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm RUN yum -y install moon-buggy && yum clean all CMD [\"/usr/bin/moon-buggy\"]",
"podman build -t moon-buggy .",
"podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/moon-buggy latest c97c58abb564 13 seconds ago 1.67 GB registry.access.redhat.com/ubi8/ubi latest 4199acc83c6a 132seconds ago 213 MB",
"podman run -it --name moon moon-buggy",
"podman tag moon-buggy registry.example.com/moon-buggy",
"podman push registry.example.com/moon-buggy"
]
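The following is a minimal sketch, not part of the documented procedure, that ties the nested-Podman steps above together by reusing the storage-sharing approach shown in the skopeo example. The bind-mounted storage path and the --rm cleanup flag are assumptions, so adjust them to your environment.

# Share the rootless host storage with the outer container so images pulled by the inner Podman persist on the host
podman run --privileged --rm -v $HOME/.local/share/containers/storage:/var/lib/containers/storage registry.access.redhat.com/ubi8/podman podman run ubi8 echo hello

# The ubi8 image pulled by the inner Podman is now listed in the host user's local storage
podman images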
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_running-skopeo-buildah-and-podman-in-a-container |
Chapter 6. Installation configuration parameters for Azure Stack Hub | Chapter 6. Installation configuration parameters for Azure Stack Hub Before you deploy an OpenShift Container Platform cluster on Azure Stack Hub, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 6.1. Available installation configuration parameters for Azure Stack Hub The following tables specify the required, optional, and Azure Stack Hub-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . 
The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. 
This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . 
Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 6.4. Additional Azure Stack Hub parameters Parameter Description Values The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . Defines the azure instance type for compute machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS . Defines the azure instance type for control plane machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . The Azure instance type for control plane and compute machines. The Azure instance type. The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . The name of your Azure Stack Hub local region. String The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: type:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: type:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: armEndpoint:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: region:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: cloudName:",
"clusterOSImage:"
]
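As an illustration only, the following condensed install-config.yaml sketch combines the parameters described above for an Azure Stack Hub deployment. Every value is a placeholder, only a representative subset of the optional fields is shown, and the file is not a complete, validated configuration; the nesting follows the parameter paths listed in the tables.

apiVersion: v1
baseDomain: example.com
metadata:
  name: ash-cluster
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: https://management.local.azurestack.external
    baseDomainResourceGroupName: production_cluster
    region: local
    cloudName: AzureStackCloud
    outboundType: LoadBalancer
    clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd
fips: false
publish: External
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

The pullSecret and sshKey values are the same truncated placeholders used in the tables above and must be replaced with real credentials before installation.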
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure_stack_hub/installation-config-parameters-ash |
Chapter 3. Configuring multi-supplier replication using the command line | Chapter 3. Configuring multi-supplier replication using the command line In a multi-supplier replication environment, two or more writable suppliers replicate data with each other. For example, set up multi-supplier replication to provide a fail-over environment and distribute the load over multiple servers. Clients can then perform read and write operations on any host that is a read-write replica. This section assumes that you have an existing Directory Server instance running on a host named supplier1.example.com . The procedures describe how to add another read-write replica named supplier2.example.com to the topology, and how to configure multi-supplier replication for the dc=example,dc=com suffix. 3.1. Preparing the new supplier using the command line To prepare the supplier2.example.com host, enable replication. This process: Configures the role of this server in the replication topology Defines the suffix that is replicated Creates the replication manager account the supplier uses to connect to this host Perform this procedure on the supplier that you want to add to the replication topology. Prerequisites You installed the Directory Server instance. For details, see Setting up a new instance on the command line using a .inf file . The database for the dc=example,dc=com suffix exists. Procedure Enable replication for the dc=example,dc=com suffix: # dsconf -D "cn=Directory Manager" ldap://supplier2.example.com replication enable --suffix "dc=example,dc=com" --role "supplier" --replica-id 1 --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" This command configures the supplier2.example.com host as a supplier for the dc=example,dc=com suffix, and sets the replica ID of this entry to 1 . Additionally, the command creates the cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Verification Display the replication configuration: # dsconf -D "cn=Directory Manager" ldap://supplier2.example.com replication get --suffix "dc=example,dc=com" dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config ... nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 3 ... These parameters indicate: nsDS5ReplicaBindDN specifies the replication manager account. nsDS5ReplicaRoot sets the suffix that is replicated. nsDS5ReplicaType set to 3 defines that this host is a supplier. Additional resources Installing Red Hat Directory Server Storing suffixes in separate databases cn=replica,cn=suffix_DN,cn=mapping tree,cn=config 3.2. Configuring the existing server as a supplier to the new server using the command line To prepare the existing server supplier1.example.com as a supplier, you need to: Enable replication for the suffix. Create a replication agreement to the new supplier. Initialize the new supplier. Perform this procedure on the existing supplier in the replication topology. Prerequisites You enabled replication for the dc=example,dc=com suffix on the supplier to join. 
Procedure Enable replication for the dc=example,dc=com suffix: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com replication enable --suffix "dc=example,dc=com" --role "supplier" --replica-id 2 --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" This command configures the supplier1.example.com host as a supplier for the dc=example,dc=com suffix, and sets the replica ID of this entry to 2 . Additionally, the command creates the cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Add the replication agreement and initialize the new server: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com repl-agmt create --suffix "dc=example,dc=com" --host "supplier2.example.com" --port 389 --conn-protocol LDAP --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" --bind-method SIMPLE --init example-agreement-supplier1-to-supplier2 This command creates a replication agreement named example-agreement-supplier1-to-supplier2 . The replication agreement defines settings, such as the new supplier's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to the new supplier. After the agreement was created, Directory Server initializes supplier2.example.com . Depending on the amount of data to replicate, initialization can be time-consuming. Verification Display the replication configuration: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com replication get --suffix "dc=example,dc=com" dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config ... nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 3 ... These parameters indicate: nsDS5ReplicaBindDN specifies the replication manager account. nsDS5ReplicaRoot sets the suffix that is replicated. nsDS5ReplicaType set to 3 defines that this host is a supplier. Verify whether the initialization was successful: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com repl-agmt init-status --suffix "dc=example,dc=com" example-agreement-supplier1-to-supplier2 Agreement successfully initialized. Display the replication status: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com repl-agmt status --suffix "dc=example,dc=com" example-agreement-supplier1-to-supplier2 Status For Agreement: "example-agreement-supplier1-to-supplier2" (supplier2.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331071545Z Last Update End: 20210331071546Z Number Of Changes Sent: 2:1/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331071541Z Last Init End: 20210331071544Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (6064219e000100020000) consumer (Unavailable) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Verify the Replication Status and Last Update Status fields. Troubleshooting By default, the replication idle timeout for all agreements on a server is 1 hour. If the initialization of large databases fails due to timeouts, set the nsslapd-idletimeout parameter to a higher value. 
For example, to set the parameter to 7200 (2 hours), enter: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com config replace nsslapd-idletimeout=7200 To set an unlimited period, set nsslapd-idletimeout to 0 . Additional resources cn=replica,cn=suffix_DN,cn=mapping tree,cn=config 3.3. Configuring the new server as a supplier to the existing server using the command line To prepare the new server supplier2.example.com as a supplier, use either of the following methods: Enable replication for the suffix. Create a replication agreement to the existing server. Warning Do not initialize the existing supplier from the new server. Otherwise, the empty database from the new server overrides the database on the existing supplier. Apply the following procedure on the existing supplier: Create a replication agreement to the new server. Initialize the new server. Prerequisites You enabled replication for the dc=example,dc=com suffix on the new server. You enabled replication for the dc=example,dc=com suffix on the existing server. The new server to join is successfully initialized. Procedure Add the replication agreement to the existing instance: # dsconf -D "cn=Directory Manager" ldap://supplier2.example.com repl-agmt create --suffix "dc=example,dc=com" --host "supplier1.example.com" --port 389 --conn-protocol LDAP --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" --bind-method SIMPLE example-agreement-supplier2-to-supplier1 Add the replication agreement to the new instance by using --init option: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com repl-agmt create --suffix "dc=example,dc=com" --host "supplier2.example.com" --port 389 --conn-protocol LDAP --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" --bind-method SIMPLE --init example-agreement-supplier1-to-supplier2 Verification Display the agreement status: # dsconf -D "cn=Directory Manager" ldap://supplier2.example.com repl-agmt init-status --suffix "dc=example,dc=com" example-agreement-supplier2-to-supplier1 Agreement successfully initialized. Display the replication status: # dsconf -D "cn=Directory Manager" ldap://supplier2.example.com repl-agmt status --suffix "dc=example,dc=com" example-agreement-supplier2-to-supplier1 Status For Agreement: ""example-agreement-supplier2-to-supplier1 (supplier1.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331073540Z Last Update End: 20210331073540Z Number Of Changes Sent: 7:1/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331073535Z Last Init End: 20210331073539Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (60642649000000070000) consumer (Unavailable) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Replication Lag Time: Unavailable Verify the Replication Status and Last Update Status fields. Troubleshooting By default, the replication idle timeout for all agreements on a server is 1 hour. If the initialization of large databases fails due to timeouts, set the nsslapd-idletimeout parameter to a higher value. For example, to set the parameter to 7200 (2 hours), enter: # dsconf -D "cn=Directory Manager" ldap://supplier2.example.com config replace nsslapd-idletimeout=7200 To set an unlimited period, set nsslapd-idletimeout to 0 . | [
"dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com replication enable --suffix \"dc=example,dc=com\" --role \"supplier\" --replica-id 1 --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\"",
"dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com replication get --suffix \"dc=example,dc=com\" dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 3",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com replication enable --suffix \"dc=example,dc=com\" --role \"supplier\" --replica-id 2 --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\"",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt create --suffix \"dc=example,dc=com\" --host \"supplier2.example.com\" --port 389 --conn-protocol LDAP --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\" --bind-method SIMPLE --init example-agreement-supplier1-to-supplier2",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com replication get --suffix \"dc=example,dc=com\" dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 3",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt init-status --suffix \"dc=example,dc=com\" example-agreement-supplier1-to-supplier2 Agreement successfully initialized.",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt status --suffix \"dc=example,dc=com\" example-agreement-supplier1-to-supplier2 Status For Agreement: \"example-agreement-supplier1-to-supplier2\" (supplier2.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331071545Z Last Update End: 20210331071546Z Number Of Changes Sent: 2:1/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331071541Z Last Init End: 20210331071544Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (6064219e000100020000) consumer (Unavailable) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded)",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com config replace nsslapd-idletimeout=7200",
"dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com repl-agmt create --suffix \"dc=example,dc=com\" --host \"supplier1.example.com\" --port 389 --conn-protocol LDAP --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\" --bind-method SIMPLE example-agreement-supplier2-to-supplier1",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt create --suffix \"dc=example,dc=com\" --host \"supplier2.example.com\" --port 389 --conn-protocol LDAP --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\" --bind-method SIMPLE --init example-agreement-supplier1-to-supplier2",
"dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com repl-agmt init-status --suffix \"dc=example,dc=com\" example-agreement-supplier2-to-supplier1 Agreement successfully initialized.",
"dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com repl-agmt status --suffix \"dc=example,dc=com\" example-agreement-supplier2-to-supplier1 Status For Agreement: \"\"example-agreement-supplier2-to-supplier1 (supplier1.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331073540Z Last Update End: 20210331073540Z Number Of Changes Sent: 7:1/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331073535Z Last Init End: 20210331073539Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (60642649000000070000) consumer (Unavailable) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Replication Lag Time: Unavailable",
"dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com config replace nsslapd-idletimeout=7200"
]
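As a quick end-to-end check that is not part of the documented procedures, you can add a test entry on one supplier with the standard OpenLDAP client tools and confirm that it is replicated to the other. The entry attributes and the -x simple-bind options below are assumptions; any entry under dc=example,dc=com works equally well.

# Add a test entry on supplier1
ldapadd -H ldap://supplier1.example.com -x -D "cn=Directory Manager" -W << EOF
dn: uid=repltest,dc=example,dc=com
objectClass: inetOrgPerson
uid: repltest
cn: Replication Test
sn: Test
EOF

# After replication has caught up, the same entry should be returned by supplier2
ldapsearch -H ldap://supplier2.example.com -x -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(uid=repltest)"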
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_configuring-multi-supplier-replication-using-the-command-line_configuring-and-managing-replication |
Chapter 5. Configuring Kerberos SSO for Active Directory users in Satellite | Chapter 5. Configuring Kerberos SSO for Active Directory users in Satellite If the base system of your Satellite Server is connected directly to Active Directory (AD), you can configure AD as an external authentication source for Satellite. Direct AD integration means that a Linux system is joined directly to the AD domain where the identity is stored. AD users can log in using the following methods: Username and password Kerberos single sign-on Note You can also connect your Satellite deployment to AD in the following ways: By using indirect AD integration. With indirect integration, your Satellite Server is connected to a Identity Management server which is then connected to AD. For more information, see Chapter 3, Configuring Kerberos SSO with Identity Management in Satellite . By attaching the LDAP server of the AD domain as an external authentication source with no single sign-on support. For more information, see Chapter 6, Configuring an LDAP server as an external identity provider for Satellite . For an example configuration, see How to configure Active Directory authentication with TLS on Satellite . 5.1. Configuring the Active Directory authentication source on Satellite Server Enable Active Directory (AD) users to access Satellite by configuring the corresponding authentication provider on your Satellite Server. Prerequisites The base system of your Satellite Server must be joined to an Active Directory (AD) domain. To enable AD users to sign in with Kerberos single sign-on, use the System Security Services Daemon (SSSD) and Samba services to join the base system to the AD domain: Install the following packages on Satellite Server: Specify the required software when joining the AD domain: For more information on direct AD integration, see Connecting RHEL systems directly to AD using Samba Winbind . Procedure Define AD realm configuration in a location where satellite-installer expects it: Create a directory named /etc/ipa/ : Create the /etc/ipa/default.conf file with the following contents to configure the Kerberos realm for the AD domain: Configure the Apache keytab for Kerberos connections: Update the /etc/samba/smb.conf file with the following settings to configure how Samba interacts with AD: Add the Kerberos service principal to the keytab file at /etc/httpd/conf/http.keytab : Configure the System Security Services Daemon (SSSD) to use the AD access control provider to evaluate and enforce Group Policy Object (GPO) access control rules for the foreman PAM service: In the [domain/ ad.example.com ] section of your /etc/sssd/sssd.conf file, configure the ad_gpo_access_control and ad_gpo_map_service options as follows: For more information on GPOs, see the following documents: How SSSD interprets GPO access control rules in Integrating RHEL systems directly with Windows Active Directory (RHEL 9) How SSSD interprets GPO access control rules in Integrating RHEL systems directly with Windows Active Directory (RHEL 8) Restart SSSD: Enable the authentication source: Verification To verify that AD users can log in to Satellite by entering their credentials, log in to Satellite web UI at https://satellite.example.com. Enter the user name in the user principal name (UPN) format, for example: ad_user @ AD.EXAMPLE.COM . 
To verify that AD users can authenticate by using Kerberos single sign-on: Obtain a Kerberos ticket-granting ticket (TGT) on behalf of an AD user: Verify user authentication by using your TGT: Troubleshooting Connecting to the AD LDAP can sometimes fail with an error such as the following appearing in the logs: If you see this error, verify which cipher is used for the connection: If the TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 cipher is used, disable it on either the Satellite Server side or on the AD side. The TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 cipher is known to cause incompatibilities. For more information, see the Red Hat Knowledgebase solution API calls to Red Hat Satellite 6 fail intermittently on LDAP authentication . Additional resources sssd-ad(5) man page on your system For information about configuring Mozilla Firefox for Kerberos, see Configuring Firefox to use Kerberos for single sign-on in Red Hat Enterprise Linux 9 Configuring authentication and authorization in RHEL . | [
"satellite-maintain packages install adcli krb5-workstation oddjob-mkhomedir oddjob realmd samba-winbind-clients samba-winbind samba-common-tools samba-winbind-krb5-locator sssd",
"realm join AD.EXAMPLE.COM --membership-software=samba --client-software=sssd",
"mkdir /etc/ipa/",
"[global] realm = AD.EXAMPLE.COM",
"[global] workgroup = AD.EXAMPLE realm = AD.EXAMPLE.COM kerberos method = system keytab security = ads",
"KRB5_KTNAME=FILE:/etc/httpd/conf/http.keytab net ads keytab add HTTP -U Administrator -s /etc/samba/smb.conf",
"[domain/ ad.example.com ] ad_gpo_access_control = enforcing ad_gpo_map_service = +foreman",
"systemctl restart sssd",
"satellite-installer --foreman-ipa-authentication=true",
"kinit ad_user @ AD.EXAMPLE.COM",
"curl -k -u : --negotiate https://satellite.example.com/users/extlogin <html><body>You are being <a href=\"satellite.example.com/hosts\">redirected</a>.</body></html>",
"Authentication failed with status code: { \"error\": { \"message\": \"ERF77-7629 [Foreman::LdapException]: Error while connecting to 'server.com' LDAP server at 'ldap.example.com' during authentication ([Net::LDAP::Error]: Connection reset by peer - SSL_connect)\" } }",
"openssl s_client -connect ldap.example.com :636"
]
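Two additional generic checks, not part of the documented procedure, can confirm the prerequisites before testing logins. The file path is the one used above, and the commands come from the krb5-workstation and realmd packages.

# Confirm that the HTTP service principal was added to the Apache keytab
klist -k /etc/httpd/conf/http.keytab

# Confirm that the host is still joined to the AD domain through SSSD
realm list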
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_authentication_for_red_hat_satellite_users/configuring-kerberos-sso-for-active-directory-users-in-project_authentication |
Chapter 3. The gofmt formatting tool | Chapter 3. The gofmt formatting tool Instead of a style guide, the Go programming language uses the gofmt code formatting tool. gofmt automatically formats your code according to the Go layout rules. 3.1. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . 3.2. Formatting code You can use the gofmt formatting tool to format code in a given path. When the path leads to a single file, the changes apply only to the file. When the path leads to a directory, all .go files in the directory are processed. Procedure To format your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. Note To print the formatted code to standard output instead of writing it to the original file, omit the -w option. 3.3. Previewing changes to code You can use the gofmt formatting tool to preview changes done by formatting code in a given path. The output in unified diff format is printed to standard output. Procedure To show differences in your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to compare. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to compare. 3.4. Simplifying code You can use the gofmt formatting tool to simplify your code. Procedure To simplify code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to simplify. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to simplify. To apply the changes, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. 3.5. Refactoring code You can use the gofmt formatting tool to refactor your code by applying arbitrary substitutions. Procedure To refactor your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to refactor and < rewrite_rule > with the rule you want it to be rewritten by. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to refactor and < rewrite_rule > with the rule you want it to be rewritten by. To apply the changes, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. 3.6. Additional resources The official gofmt documentation . | [
"gofmt -w < code_path >",
"gofmt -w < code_path >",
"gofmt -d < code_path >",
"gofmt -d < code_path >",
"gofmt -s -w < code_path >",
"gofmt -s -w < code_path >",
"gofmt -w < code_path >",
"gofmt -w < code_path >",
"gofmt -r -w < rewrite_rule > < code_path >",
"gofmt -r -w < rewrite_rule > < code_path >",
"gofmt -w < code_path >",
"gofmt -w < code_path >"
]
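To make section 3.5 more concrete, the following uses the classic rewrite rule from the upstream gofmt documentation; the file name main.go is only an illustration.

# Before the rewrite, main.go contains: s := a[b:len(a)]
gofmt -r 'a[b:len(a)] -> a[b:]' -w main.go
# After the rewrite, the same line reads: s := a[b:]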
| https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.22_toolset/assembly_the-gofmt-formatting-tool_using-go-toolset |
Chapter 2. Container security | Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.9 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 8 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules FIPS cryptography Disk encryption Chrony time service About the OpenShift Update Service 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS compliance or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. 
You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 8 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
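A minimal sketch of the machine config approach described above follows; it adds a single kernel argument to every worker node. The object name and the audit=1 argument are illustrative only, and the Machine Config Operator rolls the change out to the worker pool after you apply the file with oc apply -f.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-audit
spec:
  kernelArguments:
  - audit=1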
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to Nodes Installation configuration parameters - see fips Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift
version: 4.9.0
metadata:
  name: 51-worker-rh-registry-trust
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/containers/policy.json
      mode: 0644
      overwrite: true
      contents:
        inline: |
          {
            "default": [
              {
                "type": "insecureAcceptAnything"
              }
            ],
            "transports": {
              "docker": {
                "registry.access.redhat.com": [
                  {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
                  }
                ],
                "registry.redhat.io": [
                  {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
                  }
                ]
              },
              "docker-daemon": {
                "": [
                  {
                    "type": "insecureAcceptAnything"
                  }
                ]
              }
            }
          }
Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes:
USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml
Apply the created machine config:
USD oc apply -f 51-worker-rh-registry-trust.yaml
Check that the worker machine config pool has rolled out with the new machine config:
Check that the new machine config was created:
USD oc get mc
Sample output
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
00-worker                                          a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
01-master-container-runtime                        a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
01-master-kubelet                                  a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
01-worker-container-runtime                        a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
01-worker-kubelet                                  a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
51-master-rh-registry-trust                                                                   3.2.0             13s
51-worker-rh-registry-trust                                                                   3.2.0             53s   1
99-master-generated-crio-seccomp-use-default                                                  3.2.0             25m
99-master-generated-registries                     a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
99-master-ssh                                                                                 3.2.0             28m
99-worker-generated-crio-seccomp-use-default                                                  3.2.0             25m
99-worker-generated-registries                     a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             25m
99-worker-ssh                                                                                 3.2.0             28m
rendered-master-af1e7ff78da0a9c851bab4be2777773b   a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             8s
rendered-master-cd51fd0c47e91812bfef2765c52ec7e6   a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             24m
rendered-worker-2b52f75684fbc711bd1652dd86fd0b82   a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             24m
rendered-worker-be3b3bce4f4aa52a62902304bac9da3c   a2178ad522c49ee330b0033bb5cb5ea132060b0a   3.2.0             48s   2
1 New machine config
2 New rendered machine config
Check that the worker machine config pool is updating with the new machine config:
USD oc get mcp
Sample output
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-af1e7ff78da0a9c851bab4be2777773b   True      False      False      3              3                   3                     0                      30m
worker   rendered-worker-be3b3bce4f4aa52a62902304bac9da3c   False     True       False      3              0                   0                     0                      30m   1
1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config.
If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io .
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. 
Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers, you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages, such as the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI), you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL, and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux RPM packages and other content. These UBI images are updated regularly to keep up with security patches and are free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7 and 8 ( ubi7/ubi and ubi8/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal and ubi8/ubi-minimal ).
All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ) and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by the Clair security scanner . In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users. 2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1. Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. 
Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.9 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. 
Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. 
Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVEs, software package listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat through the errata process, old, stale containers become less secure over time, whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. Additional resources Integrated OpenShift Container Platform registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage to allow for later search and analysis. Clair security scanning : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image.
Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift Container Platform registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The following figure illustrates the process and tools that could be incorporated into a trusted software supply chain for containerized software: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2. Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions (a sample trigger configuration follows this list): Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process.
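The sample referred to above is a minimal, hypothetical sketch of the triggers section of a BuildConfig object; the application name, Git repository, and builder image are placeholders, and the webhook secret must be created before GitHub can call the build webhook.
Example BuildConfig triggers (illustrative sketch)
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app                            # hypothetical application name
spec:
  source:
    git:
      uri: https://github.com/example/app.git  # hypothetical repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                    # hypothetical S2I builder image
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
  triggers:
    - type: GitHub                             # start a build when the repository webhook fires
      github:
        secret: <webhook_secret>
    - type: ImageChange                        # rebuild when the builder image is updated
      imageChange: {}
    - type: ConfigChange                       # rebuild when this BuildConfig changes
With a configuration along these lines, a commit to the repository or an update to the builder image results in a freshly built image that can then be deployed for testing and promoted through your CI process.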
You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. 
Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps About OpenShift Serverless Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, preserving the immutable container process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images, including their contents, are from trusted sources, and that they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. Two parameters define this policy: one or more registries, with optional project namespace; and a trust type, such as accept, reject, or require public key(s). You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3.
Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. 
Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admissions plugins: These implement a default set of policies and resources limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admissions plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. 
Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or post-installation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. 
Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the ingress controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. 
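As a brief, hedged illustration of the pod isolation described in this section, the following two NetworkPolicy objects sketch a common pattern for a project: deny all ingress by default, then explicitly allow traffic from pods in the same project. The policy names are arbitrary examples, and you would typically add further policies, for example to admit the ingress controller, to match your routing setup.
Example NetworkPolicy objects (illustrative sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default            # hypothetical name
spec:
  podSelector: {}                  # applies to every pod in the project
  ingress: []                      # no ingress rules, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace       # hypothetical name
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}          # permit connections only from pods in this same project
Create the objects in the project you want to isolate, for example with oc apply -f <file> -n <project>.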
Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Container Storage * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. 
2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : USD oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. For example, the following command uses the jq tool against JSON output to extract only NodeHasDiskPressure events: USD oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: USD oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs, and deployments in real time. Different users have different access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs | [
"variant: openshift version: 4.9.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }",
"butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml",
"oc apply -f 51-worker-rh-registry-trust.yaml",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1",
"oc debug node/<node_name>",
"sh-4.2# chroot /host",
"docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore",
"docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>",
"oc describe machineconfigpool/worker",
"Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3",
"oc debug node/<node> -- chroot /host cat /etc/containers/policy.json",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }",
"oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore",
"oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"quality.images.openshift.io/<qualityType>.<providerId>: {}",
"quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}",
"{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }",
"{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }",
"oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'",
"annotations: images.openshift.io/deny-execution: true",
"curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'",
"{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }",
"oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc",
"source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc",
"oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc",
"oc set triggers deploy/deployment-example --from-image=example:latest --containers=web",
"{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }",
"docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc get event -n default | grep Node",
"1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure",
"oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'",
"{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }",
"oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'",
"4"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/security_and_compliance/container-security-1 |
Chapter 3. Configuration of HawtIO | Chapter 3. Configuration of HawtIO The behavior of HawtIO and its plugins can be configured through System properties. 3.1. Configuration properties The following table lists the configuration properties for the HawtIO core system and various plugins. System Default Description hawtio.disableProxy false Set this property to true to disable ProxyServlet (/hawtio/proxy/*). This makes the Connect plugin unavailable, which means HawtIO can no longer connect to remote JVMs, but users might want to disable the proxy for security reasons when the Connect plugin is not used. hawtio.localAddressProbing true Whether local address probing for the proxy allowlist is enabled upon startup. Set this property to false to disable it. hawtio.proxyAllowlist localhost, 127.0.0.1 Comma-separated allowlist of target hosts that the Connect plugin can connect to via ProxyServlet. All hosts not listed in this allowlist are denied connections for security reasons. This option can be set to * to allow all hosts. Prefixing an element of the list with "r:" allows you to define a regex (example: localhost,r:myserver[0-9]+.mydomain.com) hawtio.redirect.scheme The URL scheme to use when redirecting to the login page when authentication is required. hawtio.sessionTimeout The maximum time interval, in seconds, that the servlet container will keep this session open between client accesses. If this option is not configured, then HawtIO uses the default session timeout of the servlet container. 3.1.1. Quarkus For Quarkus, all these properties are configurable in application.properties or application.yaml with the quarkus.hawtio prefix. For example: quarkus.hawtio.disableProxy = true 3.1.2. Spring Boot For Spring Boot, all these properties are configurable in application.properties or application.yaml as is. For example: hawtio.disableProxy = true 3.2. Configuring Jolokia through system properties The Jolokia agent is deployed automatically with io.hawt.web.JolokiaConfiguredAgentServlet , which extends the native Jolokia org.jolokia.http.AgentServlet class and is defined in hawtio-war/WEB-INF/web.xml . If you want to customize the Jolokia Servlet with the configuration parameters that are defined in the Jolokia documentation , you can pass them as System properties prefixed with jolokia . For example: jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml 3.2.1. RBAC Restrictor For runtimes that support HawtIO RBAC (role-based access control), HawtIO provides a custom Jolokia Restrictor implementation that adds a further layer of protection over JMX operations based on the ACL (access control list) policy. Warning You cannot use HawtIO RBAC with Quarkus and Spring Boot yet. Enabling the RBAC Restrictor on those runtimes only imposes additional load without any gains. To activate the HawtIO RBAC Restrictor, configure the Jolokia parameter restrictorClass via System property to use io.hawt.system.RBACRestrictor as follows: jolokia.restrictorClass = io.hawt.system.RBACRestrictor | [
"quarkus.hawtio.disableProxy = true",
"hawtio.disableProxy = true",
"jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml",
"jolokia.restrictorClass = io.hawt.system.RBACRestrictor"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/configuration-of-hawtio |
Appendix A. Versioning information | Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_red_hat_decision_manager_on_red_hat_openshift_container_platform/versioning-information |
Chapter 2. Overview of responsibilities for Red Hat Advanced Cluster Security Cloud Service | Chapter 2. Overview of responsibilities for Red Hat Advanced Cluster Security Cloud Service This documentation outlines Red Hat and customer responsibilities for the RHACS Cloud Service managed service. 2.1. Shared responsibilities for RHACS Cloud Service While Red Hat manages the RHACS Cloud Service services, also referred to as Central services , the customer has certain responsibilities. Resource or action Red Hat responsibility Customer responsibility Hosted components, also called Central components Platform monitoring Software updates High availability Backup and restore Security Infrastructure configuration Scaling Maintenance Vulnerability management Access and identity authorization Secured clusters (on-premise or cloud) Software updates Backup and restore Security Infrastructure configuration Scaling Maintenance Access and identity authorization Vulnerability management | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/rhacs_cloud_service/overview-of-responsibilities-for-rhacs-cloud-service |
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.22/pr01 |
Chapter 5. RoleBinding [authorization.openshift.io/v1] | Chapter 5. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but does not contain it. It can reference any Role in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required subjects roleRef 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources groupNames array (string) GroupNames holds all the groups directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata roleRef ObjectReference RoleRef can only reference the current namespace and the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. Since Policy is a singleton, this is sufficient knowledge to locate a role. subjects array (ObjectReference) Subjects hold object references to authorize with this rule. This field is ignored if UserNames or GroupNames are specified to support legacy clients and servers. Thus newer clients that do not need to support backwards compatibility should send only fully qualified Subjects and should omit the UserNames and GroupNames fields. Clients that need to support backwards compatibility can use this field to build the UserNames and GroupNames. userNames array (string) UserNames holds all the usernames directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/rolebindings GET : list objects of kind RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings GET : list objects of kind RoleBinding POST : create a RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding 5.2.1. /apis/authorization.openshift.io/v1/rolebindings Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion.
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind RoleBinding Table 5.2. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 5.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings Table 5.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind RoleBinding Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body RoleBinding schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 5.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} Table 5.10. Global path parameters Parameter Type Description name string name of the RoleBinding namespace string object name and auth scope, such as for teams and projects Table 5.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a RoleBinding Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.13. Body parameters Parameter Type Description body DeleteOptions schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 5.15. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. 
It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body RoleBinding schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/role_apis/rolebinding-authorization-openshift-io-v1 |
Chapter 65. Kubernetes Nodes | Chapter 65. Kubernetes Nodes Since Camel 2.17 Both producer and consumer are supported The Kubernetes Nodes component is one of the Kubernetes Components which provides a producer to execute Kubernetes Node operations and a consumer to consume events related to Node objects. 65.1. Dependencies When using kubernetes-nodes with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 65.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 65.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 65.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 65.3. Component Options The Kubernetes Nodes component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 65.4. Endpoint Options The Kubernetes Nodes endpoint is configured using URI syntax: with the following path and query parameters: 65.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 65.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 65.5. Message Headers The Kubernetes Nodes component supports 6 message header(s), which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNodesLabels (producer) Constant: KUBERNETES_NODES_LABELS The node labels. Map CamelKubernetesNodeName (producer) Constant: KUBERNETES_NODE_NAME The node name. String CamelKubernetesNodeSpec (producer) Constant: KUBERNETES_NODE_SPEC The spec for a node. NodeSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 65.6. Supported producer operation listNodes listNodesByLabels getNode createNode updateNode deleteNode 65.7. Kubernetes Nodes Producer Examples listNodes: this operation lists the nodes on a Kubernetes cluster. from("direct:list"). toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes"). to("mock:result"); This operation returns a List of Nodes from your cluster. listNodesByLabels: this operation lists the nodes by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels); } }). toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels"). to("mock:result"); This operation returns a List of Nodes from your cluster, using a label selector (with key1 and key2, with values value1 and value2). 65.8. Kubernetes Nodes Consumer Example fromF("kubernetes-nodes://%s?oauthToken=%s&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Node node = exchange.getIn().getBody(Node.class); log.info("Got event with node name: " + node.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events for the node test. 65.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes.
Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. 
The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. 
The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"kubernetes-nodes:masterUrl",
"from(\"direct:list\"). toF(\"kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels); } }); toF(\"kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels\"). to(\"mock:result\");",
"fromF(\"kubernetes-nodes://%s?oauthToken=%s&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Node node = exchange.getIn().getBody(Node.class); log.info(\"Got event with configmap name: \" + node.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-nodes-component-starter |
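As a brief illustration of the Spring Boot auto-configuration options listed in the table above, a Camel Spring Boot application could set a few of them in its application.properties file. This is only a sketch: the property names come straight from the table, while the values shown here are illustrative defaults rather than recommendations.
camel.component.kubernetes-nodes.enabled = true
camel.component.kubernetes-nodes.bridge-error-handler = false
camel.component.kubernetes-nodes.lazy-start-producer = false
camel.cluster.kubernetes.config-map-name = leaders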
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 6. Updating 6.1. Updating OpenShift Virtualization Learn how to keep OpenShift Virtualization updated and compatible with OpenShift Container Platform. 6.1.1. About updating OpenShift Virtualization When you install OpenShift Virtualization, you select an update channel and an approval strategy. The update channel determines the versions that OpenShift Virtualization will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability. 6.1.1.1. Recommended settings To maintain a supportable environment, use the following settings: Update channel: stable Approval strategy: Automatic With these settings, the update process automatically starts when a new version of the Operator is available in the stable channel. This ensures that your OpenShift Virtualization and OpenShift Container Platform versions remain compatible, and that your version of OpenShift Virtualization is suitable for production environments. Note Each minor version of OpenShift Virtualization is supported only if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.18 on OpenShift Container Platform 4.18. 6.1.1.2. What to expect The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. Updating OpenShift Virtualization does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during an update. Important If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Set the evictionStrategy field to None and the runStrategy field to Always . 6.1.1.3. How updates work Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster. OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform. 6.1.1.4. RHEL 9 compatibility OpenShift Virtualization 4.18 is based on Red Hat Enterprise Linux (RHEL) 9. You can update to OpenShift Virtualization 4.18 from a version that was based on RHEL 8 by following the standard OpenShift Virtualization update procedure. No additional steps are required. As in previous versions, you can perform the update without disrupting running workloads. OpenShift Virtualization 4.18 supports live migration from RHEL 8 nodes to RHEL 9 nodes. 6.1.1.4.1. RHEL 9 machine type All VM templates that are included with OpenShift Virtualization now use the RHEL 9 machine type by default: machineType: pc-q35-rhel9.<y>.0 , where <y> is a single digit corresponding to the latest minor version of RHEL 9. For example, the value pc-q35-rhel9.2.0 is used for RHEL 9.2. Updating OpenShift Virtualization does not change the machineType value of any existing VMs. These VMs continue to function as they did before the update.
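For example, the machine type that an existing VM currently uses can be read from its VirtualMachine spec. The following command is a sketch that assumes the KubeVirt field path spec.template.spec.domain.machine.type; the VM name and namespace are placeholders:
$ oc get vm <vm-name> -n <namespace> -o jsonpath='{.spec.template.spec.domain.machine.type}'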
You can optionally change a VM's machine type so that it can benefit from RHEL 9 improvements. Important Before you change a VM's machineType value, you must shut down the VM. 6.1.2. Monitoring update status To monitor the status of an OpenShift Virtualization Operator update, watch the cluster service version (CSV) PHASE . You can also monitor the CSV conditions in the web console or by running the command provided here. Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Run the following command: $ oc get csv -n openshift-cnv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully 6.1.3. VM workload updates When you update OpenShift Virtualization, virtual machine workloads, including libvirt , virt-launcher , and qemu , update automatically if they support live migration. Note Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt , which is used to manage the virtual machine (VM) process. You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict . Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default. When LiveMigrate is the only update strategy enabled: VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled. VMIs that do not support live migration are not disrupted or updated. If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated. If you enable both LiveMigrate and Evict : VMIs that support live migration use the LiveMigrate update strategy. VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always set, a new VMI is created in a new pod with updated components. Migration attempts and timeouts When updating workloads, live migration fails if a pod is in the Pending state for the following periods: 5 minutes If the pod is pending because it is Unschedulable . 15 minutes If the pod is stuck in the pending state for any reason. When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely. Note Each attempt corresponds to a migration object.
Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging. 6.1.3.1. Configuring workload update methods You can configure workload update methods by editing the HyperConverged custom resource (CR). Prerequisites To use live migration as an update method, you must first enable live migration in the cluster. Note If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update. Procedure To open the HyperConverged CR in your default editor, run the following command: $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: "1m0s" 5 # ... 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict . If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty. 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. 3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always configured, a new VMI is created in a new pod with updated components. 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method. 5 The interval to wait before evicting the batch of workloads. This does not apply to the LiveMigrate method. Note You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR. To apply your changes, save and exit the editor. 6.1.3.2. Viewing outdated VM workloads You can view a list of outdated virtual machine (VM) workloads by using the CLI. Note If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires. Procedure To view a list of outdated virtual machine instances (VMIs), run the following command: $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces Note To ensure that VMIs update automatically, configure workload updates. 6.1.4. Control Plane Only updates Every even-numbered minor version of OpenShift Container Platform, including 4.10 and 4.12, is an Extended Update Support (EUS) version. However, because Kubernetes design mandates serial minor version updates, you cannot directly update from one EUS version to the next EUS version. After you update from the source EUS version to the next odd-numbered minor version, you must sequentially update OpenShift Virtualization to all z-stream releases of that minor version that are on your update path.
When you have upgraded to the latest applicable z-stream version, you can then update OpenShift Container Platform to the target EUS minor version. When the OpenShift Container Platform update succeeds, the corresponding update for OpenShift Virtualization becomes available. You can now update OpenShift Virtualization to the target EUS version. For more information about EUS versions, see the Red Hat OpenShift Container Platform Life Cycle Policy . 6.1.4.1. Prerequisites Before beginning a Control Plane Only update, you must: Pause worker nodes' machine config pools before you start a Control Plane Only update so that the workers are not rebooted twice. Disable automatic workload updates before you begin the update process. This is to prevent OpenShift Virtualization from migrating or evicting your virtual machines (VMs) until you update to your target EUS version. Note By default, OpenShift Virtualization automatically updates workloads, such as the virt-launcher pod, when you update the OpenShift Virtualization Operator. You can configure this behavior in the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource. Learn more about Performing a Control Plane Only update . 6.1.4.2. Preventing workload updates during a Control Plane Only update When you update from one Extended Update Support (EUS) version to the next EUS version, you must manually disable automatic workload updates to prevent OpenShift Virtualization from migrating or evicting workloads during the update process. Important In OpenShift Container Platform 4.16, the underlying Red Hat Enterprise Linux CoreOS (RHCOS) upgraded to version 9.4 of Red Hat Enterprise Linux (RHEL). To operate correctly, all virt-launcher pods in the cluster need to use the same version of RHEL. After upgrading to OpenShift Container Platform 4.16 from an earlier version, re-enable workload updates in OpenShift Virtualization to allow virt-launcher pods to update. Before upgrading to the next OpenShift Container Platform version, verify that all VMIs use up-to-date workloads: $ oc get kv kubevirt-kubevirt-hyperconverged -o json -n openshift-cnv | jq .status.outdatedVirtualMachineInstanceWorkloads If the command returns a value larger than 0 , list all VMIs with outdated virt-launcher pods and start live migration to update them to a new version: $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces For the list of supported OpenShift Container Platform releases and the RHEL versions they use, see RHEL Versions Utilized by RHCOS and OpenShift Container Platform . Prerequisites You are running an EUS version of OpenShift Container Platform and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between. You read "Preparing to perform a Control Plane Only update" and learned the caveats and requirements that pertain to your OpenShift Container Platform cluster. You paused the worker nodes' machine config pools as directed by the OpenShift Container Platform documentation. It is recommended that you use the default Automatic approval strategy. If you use the Manual approval strategy, you must approve all pending updates in the web console. For more details, refer to the "Manually approving a pending Operator update" section.
Procedure Run the following command and record the workloadUpdateMethods configuration: $ oc get kv kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}' Turn off all workload update methods by running the following command: $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]' Example output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Ensure that the HyperConverged Operator is Upgradeable before you continue. Enter the following command and monitor the output: $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions" Example 6.1. Example output [ { "lastTransitionTime": "2022-12-09T16:29:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "ReconcileComplete" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Progressing" }, { "lastTransitionTime": "2022-12-09T16:39:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Degraded" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Upgradeable" 1 } ] 1 The OpenShift Virtualization Operator has the Upgradeable status. Manually update your cluster from the source EUS version to the next minor version of OpenShift Container Platform: $ oc adm upgrade Verification Check the current version by running the following command: $ oc get clusterversion Note Updating OpenShift Container Platform to the next version is a prerequisite for updating OpenShift Virtualization. For more details, refer to the "Updating clusters" section of the OpenShift Container Platform documentation. Update OpenShift Virtualization. With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform. If you use the Manual approval strategy, approve the pending updates by using the web console. Monitor the OpenShift Virtualization update by running the following command: $ oc get csv -n openshift-cnv Update OpenShift Virtualization to every z-stream version that is available for the non-EUS minor version, monitoring each update by running the command shown in the previous step. Confirm that OpenShift Virtualization successfully updated to the latest z-stream release of the non-EUS version by running the following command: $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions" Example output [ { "name": "operator", "version": "4.18.0" } ] Wait until the HyperConverged Operator has the Upgradeable status before you perform the next update. Enter the following command and monitor the output: $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions" Update OpenShift Container Platform to the target EUS version.
Confirm that the update succeeded by checking the cluster version: $ oc get clusterversion Update OpenShift Virtualization to the target EUS version. With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform. If you use the Manual approval strategy, approve the pending updates by using the web console. Monitor the OpenShift Virtualization update by running the following command: $ oc get csv -n openshift-cnv The update completes when the VERSION field matches the target EUS version and the PHASE field reads Succeeded . Restore the workloadUpdateMethods configuration that you recorded from step 1 with the following command: $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \ "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":{WorkloadUpdateMethodConfig}}]" Example output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Verification Check the status of VM migration by running the following command: $ oc get vmim -A Next steps Unpause the machine config pools for each compute node. 6.1.5. Advanced options The stable release channel and the Automatic approval strategy are recommended for most OpenShift Virtualization installations. Use other settings only if you understand the risks. 6.1.5.1. Changing update settings You can change the update channel and approval strategy for your OpenShift Virtualization Operator subscription by using the web console. Prerequisites You have installed the OpenShift Virtualization Operator. You have administrator permissions. Procedure Click Operators Installed Operators . Select OpenShift Virtualization from the list. Click the Subscription tab. In the Subscription details section, click the setting that you want to change. For example, to change the approval strategy from Manual to Automatic , click Manual . In the window that opens, select the new update channel or approval strategy. Click Save . 6.1.5.2. Manual approval strategy If you use the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. To avoid risking the supportability and functionality of your cluster, use the Automatic approval strategy. If you must use the Manual approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available. 6.1.5.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve .
Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.1.6. Additional resources Performing a Control Plane Only update What are Operators? Operator Lifecycle Manager concepts and resources Cluster service versions (CSVs) About live migration Configuring eviction strategies Configuring live migration limits and timeouts | [
"oc get csv -n openshift-cnv",
"VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'",
"ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5",
"oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces",
"oc get kv kubevirt-kubevirt-hyperconverged -o json -n openshift-cnv | jq .status.outdatedVirtualMachineInstanceWorkloads",
"oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces",
"oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":[]}]'",
"hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"",
"[ { \"lastTransitionTime\": \"2022-12-09T16:29:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"ReconcileComplete\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Available\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Progressing\" }, { \"lastTransitionTime\": \"2022-12-09T16:39:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Degraded\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Upgradeable\" 1 } ]",
"oc adm upgrade",
"oc get clusterversion",
"oc get csv -n openshift-cnv",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.versions\"",
"[ { \"name\": \"operator\", \"version\": \"4.18.0\" } ]",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"",
"oc get clusterversion",
"oc get csv -n openshift-cnv",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \"[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/spec/workloadUpdateStrategy/workloadUpdateMethods\\\", \\\"value\\\":{WorkloadUpdateMethodConfig}}]\"",
"hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched",
"oc get vmim -A"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/virtualization/updating |
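As a reference for the restore step in the update procedure above, the following sketch shows one possible final command, assuming the value recorded before the update was the default pair of LiveMigrate and Evict; substitute the value that you actually recorded:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \
  '[{"op":"add","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods","value":["LiveMigrate","Evict"]}]'

# Confirm the restored value (should print the list you recorded)
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'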
Chapter 3. Verifying OpenShift Data Foundation deployment for internal mode | Chapter 3. Verifying OpenShift Data Foundation deployment for internal mode Use this section to verify that OpenShift Data Foundation is deployed correctly. Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 3.1. Verifying the state of the pods To determine if OpenShift Data Foundation is deployed successfully, you can verify that the pods are in Running state. Procedure Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation cluster" . Verify that the following pods are in running and completed state by clicking the Running and the Completed tabs: Table 3.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) OpenShift Data Foundation Client Operator ocs-client-operator-console-* (1 pod on any storage node) ocs-client-operator-controller-manager-* (1 pod on any storage node) UX Backend ux-backend-server-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (2 pods distributed across storage nodes) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) rook-ceph-exporter rook-ceph-exporter-worker-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Block and File dashboard under the Overview tab, verify that both Storage Cluster and Data Resiliency have a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3.
Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, the applicative data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking regular backups of the NooBaa DB PVC. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created when the OpenShift Data Foundation cluster is created: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_power/verifying_openshift_data_foundation_deployment_for_internal_mode
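In addition to the console checks in this chapter, a quick CLI spot-check is sketched below. It assumes the default openshift-storage namespace and the ocs-storagecluster names used above; adjust them if your deployment differs:

# Pods should be Running or Completed
$ oc get pods -n openshift-storage

# The storage cluster phase should report Ready when healthy
$ oc get storagecluster -n openshift-storage

# The four storage classes listed above should be present
$ oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'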
Chapter 5. OpenShift Data Foundation upgrade overview | Chapter 5. OpenShift Data Foundation upgrade overview As an operator bundle managed by the Operator Lifecycle Manager (OLM), OpenShift Data Foundation leverages its operators to perform high-level tasks of installing and upgrading the product through ClusterServiceVersion (CSV) CRs. 5.1. Upgrade Workflows OpenShift Data Foundation recognizes two types of upgrades: Z-stream release upgrades and Minor Version release upgrades. While the user interface workflows for these two upgrade paths are not quite the same, the resulting behaviors are fairly similar. The distinctions are as follows: For Z-stream releases, OCS will publish a new bundle in the redhat-operators CatalogSource . The OLM will detect this and create an InstallPlan for the new CSV to replace the existing CSV. The Subscription approval strategy, whether Automatic or Manual, will determine whether the OLM proceeds with reconciliation or waits for administrator approval. For Minor Version releases, OpenShift Container Storage will also publish a new bundle in the redhat-operators CatalogSource . The difference is that this bundle will be part of a new channel, and channel upgrades are not automatic. The administrator must explicitly select the new release channel. Once this is done, the OLM will detect this and create an InstallPlan for the new CSV to replace the existing CSV. Since the channel switch is a manual operation, OLM will automatically start the reconciliation. From this point onwards, the upgrade processes are identical. 5.2. ClusterServiceVersion Reconciliation When the OLM detects an approved InstallPlan , it begins the process of reconciling the CSVs. Broadly, it does this by updating the operator resources based on the new spec, verifying the new CSV installs correctly, then deleting the old CSV. The upgrade process will push updates to the operator Deployments, which will trigger the restart of the operator Pods using the images specified in the new CSV. Note While it is possible to make changes to a given CSV and have those changes propagate to the relevant resource, when upgrading to a new CSV all custom changes will be lost, as the new CSV will be created based on its unaltered spec. 5.3. Operator Reconciliation At this point, the reconciliation of the OpenShift Data Foundation operands proceeds as defined in the OpenShift Data Foundation installation overview . The operators will ensure that all relevant resources exist in their expected configurations as specified in the user-facing resources (for example, StorageCluster ). | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_upgrade_overview |
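To observe these OLM objects directly, the following sketch inspects the Subscription, InstallPlan, and CSV from the CLI. The odf-operator subscription name, the stable-4.18 channel, and the openshift-storage namespace are assumptions for illustration and may differ in your cluster:

# Show the current update channel and approval strategy
$ oc get subscription odf-operator -n openshift-storage -o jsonpath='{.spec.channel}{"  "}{.spec.installPlanApproval}{"\n"}'

# Minor Version release: explicitly switch the channel (the manual step described above)
$ oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"channel":"stable-4.18"}}'

# Watch OLM create an InstallPlan and reconcile the new CSV
$ oc get installplan,csv -n openshift-storage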
Chapter 10. Virtual machines | Chapter 10. Virtual machines 10.1. Creating virtual machines Use one of these procedures to create a virtual machine: Quick Start guided tour Quick create from the Catalog Pasting a pre-configured YAML file with the virtual machine wizard Using the CLI Warning Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines. Templates without a boot source are labeled as Boot source required . You can use these templates if you complete the steps for adding a boot source to the virtual machine . Important Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles. 10.1.1. Using a Quick Start to create a virtual machine The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective. When you click on a Quick Start tile and begin the tour, the system guides you through the process. Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine. Prerequisites Access to the website where you can download the URL link for the operating system image. Procedure In the web console, select Quick Starts from the Help menu. Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine . Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtualization VirtualMachines page displays the virtual machine. 10.1.2. Quick creating a virtual machine You can quickly create a virtual machine (VM) by using a template with an available boot source. Procedure Click Virtualization Catalog in the side menu. Click Boot source available to filter templates with boot sources. Note By default, the template list will show only Default Templates . Click All Items when filtering to see all available templates for your chosen filters. Click a template to view its details. Click Quick Create VirtualMachine to create a VM from the template. The virtual machine Details page is displayed with the provisioning status. Verification Click Events to view a stream of events as the VM is provisioned. Click Console to verify that the VM booted successfully. 10.1.3. Creating a virtual machine from a customized template Some templates require additional parameters, for example, a PVC with a boot source. You can customize select parameters of a template to create a virtual machine (VM). Procedure In the web console, select a template: Click Virtualization Catalog in the side menu. Optional: Filter the templates by project, keyword, operating system, or workload profile. Click the template that you want to customize.
Click Customize VirtualMachine . Specify parameters for your VM, including its Name and Disk source . You can optionally specify a data source to clone. Verification Click Events to view a stream of events as the VM is provisioned. Click Console to verify that the VM booted successfully. Refer to the virtual machine fields section when creating a VM from the web console. 10.1.3.1. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 10.1.3.2. Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile . Note Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization. To manually specify Volume Mode and Access Mode , you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default. Name Mode description Parameter Parameter description Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Access mode of the persistent volume. ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This is required for some features, such as live migration of virtual machines between nodes. ReadOnlyMany (ROX) Volume can be mounted as read only by many nodes. 10.1.3.3. 
Cloud-init fields Name Description Authorized SSH Keys The user's public key that is copied to ~/.ssh/authorized_keys on the virtual machine. Custom script Replaces other options with a field in which you paste a custom cloud-init script. To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile . 10.1.3.4. Pasting in a pre-configured YAML file to create a virtual machine Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen. If your YAML configuration is invalid when you click Create , an error message indicates the parameter in which the error occurs. Only one error is shown at a time. Note Navigating away from the YAML screen while editing cancels any changes to the configuration you have made. Procedure Click Virtualization VirtualMachines from the side menu. Click Create and select With YAML . Write or paste your virtual machine configuration in the editable window. Alternatively, use the example virtual machine provided by default in the YAML screen. Optional: Click Download to download the YAML configuration file in its present state. Click Create to create the virtual machine. The virtual machine is listed on the VirtualMachines page. 10.1.4. Using the CLI to create a virtual machine You can create a virtual machine from a virtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM: Example 10.1. Example manifest for a RHEL VM 1 Specify the name of the virtual machine. 2 Specify the password for cloud-user. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name> 10.1.5. Virtual machine storage volume types Storage volume type Description ephemeral A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim . The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. persistentVolumeClaim Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. dataVolume Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "" . If you specify any other value for type , such as persistentVolumeClaim , a warning is displayed, and the virtual machine does not start. cloudInitNoCloud Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. 
containerDisk References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. emptyDisk Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. 10.1.6. About RunStrategies for virtual machines A RunStrategy for virtual machines determines a virtual machine instance's (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting exists in the virtual machine configuration process as an alternative to the spec.running setting. The spec.runStrategy setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running setting with only true or false responses. However, the two settings are mutually exclusive. Only either spec.running or spec.runStrategy can be used. An error occurs if both are used. There are four defined RunStrategies. Always A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true . RerunOnFailure A VMI is re-created if the instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down. Manual The start , stop , and restart virtctl client commands can be used to control the VMI's state and existence. Halted No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false . Different combinations of the start , stop and restart virtctl commands affect which RunStrategy is used. The following table follows a VM's transition from different states. The first column shows the VM's initial RunStrategy . Each additional column shows a virtctl command and the new RunStrategy after that command is run. Initial RunStrategy start stop restart Always - Halted Always RerunOnFailure - Halted RerunOnFailure Manual Manual Manual Manual Halted Always - - Note In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node. apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always 1 template: ... 1 The VMI's current RunStrategy setting. 10.1.7. Additional resources The VirtualMachineSpec definition in the KubeVirt v0.58.0 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification.
Note The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in OpenShift Virtualization. Enable the CPU Manager to use the high-performance workload profile. See Prepare a container disk before adding it to a virtual machine as a containerDisk volume. See Deploying machine health checks for further details on deploying and enabling machine health checks. See Installer-provisioned infrastructure overview for further details on installer-provisioned infrastructure. Customizing the storage profile 10.2. Editing virtual machines You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen. 10.2.1. Editing a virtual machine in the web console You can edit a virtual machine by using the OpenShift Container Platform web console or the command line interface. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a virtual machine to open the VirtualMachine details page. Click any field that has the pencil icon, which indicates that the field is editable. For example, click the current Boot mode setting, such as BIOS or UEFI, to open the Boot mode window and select an option from the list. Click Save . Note If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 10.2.2. Editing a virtual machine YAML configuration using the web console You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed. Note Navigating away from the YAML screen while editing cancels any changes to the configuration you have made. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine. Click the YAML tab to display the editable configuration. Optional: You can click Download to download the YAML file locally in its current state. Edit the file and click Save . A confirmation message shows that the modification has been successful and includes the updated version number for the object. 10.2.3. Editing a virtual machine YAML configuration using the CLI Use this procedure to edit a virtual machine YAML configuration using the CLI. Prerequisites You configured a virtual machine with a YAML object configuration file. You installed the oc CLI. Procedure Run the following command to update the virtual machine configuration: USD oc edit <object_type> <object_ID> Open the object configuration. Edit the YAML. If you edit a running virtual machine, you need to do one of the following: Restart the virtual machine. Run the following command for the new configuration to take effect: USD oc apply <object_type> <object_ID> 10.2.4. Adding a virtual disk to a virtual machine Use this procedure to add a virtual disk to a virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details screen. Click the Disks tab and then click Add disk . 
In the Add disk window, specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile . 10.2.4.1. Editing CD-ROMs for VirtualMachines Use the following procedure to edit CD-ROMs for virtual machines. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details screen. Click the Disks tab. Click the Options menu for the CD-ROM that you want to edit and select Edit . In the Edit CD-ROM window, edit the fields: Source , Persistent Volume Claim , Name , Type , and Interface . Click Save . 10.2.4.2. Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile . Note Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization. To manually specify Volume Mode and Access Mode , you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default. Name Mode description Parameter Parameter description Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Filesystem Stores the virtual disk on a file system-based volume. 
Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Access mode of the persistent volume. ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This is required for some features, such as live migration of virtual machines between nodes. ReadOnlyMany (ROX) Volume can be mounted as read only by many nodes. 10.2.5. Adding a network interface to a virtual machine Use this procedure to add a network interface to a virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details screen. Click the Network Interfaces tab. Click Add Network Interface . In the Add Network Interface window, specify the Name , Model , Network , Type , and MAC Address of the network interface. Click Add . Note If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 10.2.5.1. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 10.2.6. Additional resources Customizing the storage profile 10.3. Editing boot order You can update the values for a boot order list by using the web console or the CLI. With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 10.3.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 10.3.2. Editing a boot order list in the web console Edit the boot order list in the web console. 
Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 10.3.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm example Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - bootOrder: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. Save the YAML file. Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console. 10.3.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 10.4. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 10.4.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster. Note When you delete a virtual machine, the data volume it uses is automatically deleted. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu of the virtual machine that you want to delete and select Delete .
Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete . In the confirmation pop-up window, click Delete to permanently delete the virtual machine. 10.4.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Note When you delete a virtual machine, the data volume it uses is automatically deleted. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace. 10.5. Exporting virtual machines You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes. You create a VirtualMachineExport custom resource (CR) by using the command line interface. Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes. 10.5.1. Creating a VirtualMachineExport custom resource You can create a VirtualMachineExport custom resource (CR) to export the following objects: Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM. VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR. PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use. The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route . The export server supports the following file formats: raw : Raw disk image file. gzip : Compressed disk image file. dir : PVC directory and files. tar.gz : Compressed PVC file. Prerequisites The VM must be shut down for a VM export. Procedure Create a VirtualMachineExport manifest to export a volume from a VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml : VirtualMachineExport example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3 1 Specify the appropriate API group: "kubevirt.io" for VirtualMachine . "snapshot.kubevirt.io" for VirtualMachineSnapshot . "" for PersistentVolumeClaim . 2 Specify VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim . 3 Optional. The default duration is 2 hours. 
Create the VirtualMachineExport CR: USD oc create -f example-export.yaml Get the VirtualMachineExport CR: USD oc get vmexport example-export -o yaml The internal and external links for the exported volumes are displayed in the status stanza: Output example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: "" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: "2022-06-21T14:10:09Z" reason: podReady status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-06-21T14:09:02Z" reason: pvcBound status: "True" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export 1 External links are accessible from outside the cluster by using an Ingress or Route . 2 Internal links are only valid inside the cluster. 10.6. Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 10.6.1. About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. 10.6.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). 
Procedure List all VMIs by running the following command: USD oc get vmis -A 10.6.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Virtualization VirtualMachines from the side menu. You can identify a standalone VMI by a dark colored badge next to its name. 10.6.4. Editing a standalone virtual machine instance using the web console You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a standalone VMI to open the VirtualMachineInstance details page. On the Details tab, click the pencil icon beside Annotations or Labels . Make the relevant changes and click Save . 10.6.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 10.6.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Click Actions Delete VirtualMachineInstance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 10.7. Controlling virtual machine states You can stop, start, restart, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port. 10.7.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you start it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions . Select Start . In the confirmation window, click Start to start the virtual machine. Note When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 10.7.2. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to restart.
Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you restart it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Restart . In the confirmation window, click Restart to restart the virtual machine. 10.7.3. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to stop. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row. To view comprehensive information about the selected virtual machine before you stop it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Stop . In the confirmation window, click Stop to stop the virtual machine. 10.7.4. Unpausing a virtual machine You can unpause a paused virtual machine from the web console. Prerequisites At least one of your virtual machines must have a status of Paused . Note You can pause virtual machines by using the virtctl client. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: In the Status column, click Paused . To view comprehensive information about the selected virtual machine before you unpause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click the pencil icon that is located on the right side of Status . In the confirmation window, click Unpause to unpause the virtual machine. 10.8. Accessing virtual machine consoles OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands. Note Running concurrent VNC connections to a single virtual machine is not currently supported. 10.8.1. Accessing virtual machine consoles in the OpenShift Container Platform web console You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console. You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console. 10.8.1.1. Connecting to the serial console Connect to the serial console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Console tab. The VNC console opens by default. Click Disconnect to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background. Click the VNC Console drop-down list and select Serial Console . Click Disconnect to end the console session. 
Optional: Open the serial console in a separate window by clicking Open Console in New Window . 10.8.1.2. Connecting to the VNC console Connect to the VNC console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Console tab. The VNC console opens by default. Optional: Open the VNC console in a separate window by clicking Open Console in New Window . Optional: Send key combinations to the virtual machine by clicking Send Key . Click outside the console window and then click Disconnect to end the session. 10.8.1.3. Connecting to a Windows virtual machine with RDP The Desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines. To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Console tab on the VirtualMachine details page of the web console and supply it to your preferred RDP client. Prerequisites A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers. An RDP client installed on a machine on the same network as the Windows virtual machine. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click a Windows virtual machine to open the VirtualMachine details page. Click the Console tab. From the list of consoles, select Desktop viewer . Click Launch Remote Desktop to download the console.rdp file. Reference the console.rdp file in your preferred RDP client to connect to the Windows virtual machine. 10.8.1.4. Switching between virtual machine displays If your Windows virtual machine (VM) has a vGPU attached, you can switch between the default display and the vGPU display by using the web console. Prerequisites The mediated device is configured in the HyperConverged custom resource and assigned to the VM. The VM is running. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines Select a Windows virtual machine to open the Overview screen. Click the Console tab. From the list of consoles, select VNC console . Choose the appropriate key combination from the Send Key list: To access the default VM display, select Ctl + Alt+ 1 . To access the vGPU display, select Ctl + Alt + 2 . Additional resources Configuring mediated devices 10.8.1.5. Copying the SSH command using the web console Copy the command to connect to a virtual machine (VM) terminal via SSH. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu for your virtual machine and select Copy SSH command . Paste it in the terminal to access the VM. 10.8.2. Accessing virtual machine consoles by using CLI commands 10.8.2.1. Accessing a virtual machine via SSH by using virtctl You can use the virtctl ssh command to forward SSH traffic to a virtual machine (VM) by using your local SSH client. If you have previously configured SSH key authentication with the VM, skip to step 2 of the procedure because step 1 is not required. Note Heavy SSH traffic on the control plane can slow down the API server. 
If you regularly need a large number of connections, use a dedicated Kubernetes Service object to access the virtual machine. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the virtctl client. The virtual machine you want to access is running. You are in the same project as the VM. Procedure Configure SSH key authentication: Use the ssh-keygen command to generate an SSH public key pair: USD ssh-keygen -f <key_file> 1 1 Specify the file in which to store the keys. Create an SSH authentication secret which contains the SSH public key to access the VM: USD oc create secret generic my-pub-key --from-file=key1=<key_file>.pub Add a reference to the secret in the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: testvm spec: running: true template: spec: accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key 1 propagationMethod: configDrive: {} 2 # ... 1 Reference to the SSH authentication Secret object. 2 The SSH public key is injected into the VM as cloud-init metadata using the configDrive provider. Restart the VM to apply your changes. Connect to the VM via SSH: Run the following command to access the VM via SSH: USD virtctl ssh -i <key_file> <vm_username>@<vm_name> Optional: To securely transfer files to or from the VM, use the following commands: Copy a file from your machine to the VM USD virtctl scp -i <key_file> <filename> <vm_username>@<vm_name>: Copy a file from the VM to your machine USD virtctl scp -i <key_file> <vm_username@<vm_name>:<filename> . Additional resources Creating a service to expose a virtual machine Understanding secrets 10.8.2.2. Using OpenSSH and virtctl port-forward You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs. This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server. Prerequisites You have installed the virtctl client. The virtual machine you want to access is running. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Add the following text to the ~/.ssh/config file on your client machine: Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p Connect to the VM by running the following command: USD ssh <user>@vm/<vm_name>.<namespace> 10.8.2.3. Accessing the serial console of a virtual machine instance The virtctl console command opens a serial console to the specified virtual machine instance. Prerequisites The virt-viewer package must be installed. The virtual machine instance you want to access must be running. Procedure Connect to the serial console with virtctl : USD virtctl console <VMI> 10.8.2.4. Accessing the graphical console of a virtual machine instances with VNC The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package. Prerequisites The virt-viewer package must be installed. The virtual machine instance you want to access must be running. 
Note If you use virtctl via SSH on a remote machine, you must forward the X session to your machine. Procedure Connect to the graphical interface with the virtctl utility: USD virtctl vnc <VMI> If the command failed, try using the -v flag to collect troubleshooting information: USD virtctl vnc <VMI> -v 4 10.8.2.5. Connecting to a Windows virtual machine with an RDP console Create a Kubernetes Service object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client. Prerequisites A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent object is included in the VirtIO drivers. An RDP client installed on your local machine. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add the label special: key in the spec.template.metadata.labels section. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: rdpservice 1 namespace: example-namespace 2 spec: ports: - targetPort: 3389 3 protocol: TCP selector: special: key 4 type: NodePort 5 # ... 1 The name of the Service object. 2 The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest. 3 The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. 4 The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest. 5 The type of service. Save the Service manifest file. Create the service by running the following command: USD oc create -f <service_name>.yaml Start the VM. If the VM is already running, restart it. Query the Service object to verify that it is available: USD oc get service -n example-namespace Example output for NodePort service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rdpservice NodePort 172.30.232.73 <none> 3389:30000/TCP 5m Run the following command to obtain the IP address for the node: USD oc get node <node_name> -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.24.0 192.168.55.101 <none> Specify the node IP address and the assigned port in your preferred RDP client. Enter the user name and password to connect to the Windows virtual machine. 10.9. Automating Windows installation with sysprep You can use Microsoft DVD images and sysprep to automate the installation, setup, and software provisioning of Windows virtual machines. 10.9.1. Using a Windows DVD to create a VM disk image Microsoft does not provide disk images for download, but you can create a disk image using a Windows DVD. This disk image can then be used to create virtual machines. Procedure In the OpenShift Virtualization web console, click Storage PersistentVolumeClaims Create PersistentVolumeClaim With Data upload form . Select the intended project. Set the Persistent Volume Claim Name . Upload the VM disk image from the Windows DVD. The image is now available as a boot source to create a new Windows VM. 10.9.2. 
Using a disk image to install Windows You can use a disk image to install Windows on your virtual machine. Prerequisites You must create a disk image using a Windows DVD. You must create an autounattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog from the side menu. Select a Windows template and click Customize VirtualMachine . Select Upload (Upload a new file to a PVC) from the Disk source list and browse to the DVD image. Click Review and create VirtualMachine . Clear Clone available operating system source to this Virtual Machine . Clear Start this VirtualMachine after creation . On the Sysprep section of the Scripts tab, click Edit . Browse to the autounattend.xml answer file and click Save . Click Create VirtualMachine . On the YAML tab, replace running:false with runStrategy: RerunOnFailure and click Save . The VM will start with the sysprep disk containing the autounattend.xml answer file. 10.9.3. Generalizing a Windows VM using sysprep Generalizing an image allows that image to remove all system-specific configuration data when the image is deployed on a virtual machine (VM). Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines . Select a Windows VM to open the VirtualMachine details page. Click the Disks tab. Click the Options menu for the sysprep disk and select Detach . Click Detach . Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool. Start the sysprep program by running the following command: %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. You can now specialize the VM. 10.9.4. Specializing a Windows virtual machine Specializing a virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. Prerequisites You must have a generalized Windows disk image. You must create an unattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog . Select a Windows template and click Customize VirtualMachine . Select PVC (clone PVC) from the Disk source list. Specify the Persistent Volume Claim project and Persistent Volume Claim name of the generalized Windows image. Click Review and create VirtualMachine . Click the Scripts tab. In the Sysprep section, click Edit , browse to the unattend.xml answer file, and click Save . Click Create VirtualMachine . During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use. 10.9.5. Additional resources Creating virtual machines Microsoft, Sysprep (Generalize) a Windows installation Microsoft, generalize Microsoft, specialize 10.10. Triggering virtual machine failover by resolving a failed node If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. 
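Before you delete the Node object, it can help to confirm which node failed and which virtual machine instances were running on it. A minimal sketch, assuming standard oc access to the cluster:

$ oc get nodes                # the failed node reports a NotReady status
$ oc get vmis -A -o wide      # the output includes the node where each VMI was scheduled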
Note If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks: Failed nodes are automatically recycled. Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes. 10.10.1. Prerequisites A node where a virtual machine was running has the NotReady condition . The virtual machine that was running on the failed node has RunStrategy set to Always . You have installed the OpenShift CLI ( oc ). 10.10.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 10.10.3. Verifying virtual machine failover After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI. 10.10.3.1. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A 10.11. Installing the QEMU guest agent on virtual machines The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks. 10.11.1. Installing QEMU guest agent on a Linux virtual machine The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. Install the agent and start the service. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. 
The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Access the virtual machine command line through one of the consoles or by SSH. Install the QEMU guest agent on the virtual machine: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent 10.11.2. Installing QEMU guest agent on a Windows virtual machine For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 10.11.2.1. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 10.11.2.2. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. 
Repeat the two steps for all required drivers. Complete the Windows installation. 10.12. Viewing the QEMU guest agent information for virtual machines When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks. 10.12.1. Prerequisites Install the QEMU guest agent on the virtual machine. 10.12.2. About the QEMU guest agent information in the web console When the QEMU guest agent is installed, the Overview and Details tabs on the VirtualMachine details page displays information about the hostname, operating system, time zone, and logged in users. The VirtualMachine details page shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. The Disks tab displays a table with information for file systems. Note If the QEMU guest agent is not installed, the Overview and the Details tabs display information about the operating system that was specified when the virtual machine was created. 10.12.3. Viewing the QEMU guest agent information in the web console You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine name to open the VirtualMachine details page. Click the Details tab to view active users. Click the Disks tab to view information about the file systems. 10.13. Managing config maps, secrets, and service accounts in virtual machines You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can: Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine. Store non-confidential configuration data in a config map so that a pod or another object can consume the data. Allow a component to access the API server by associating a service account with that component. Note OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead. 10.13.1. Adding a secret, config map, or service account to a virtual machine You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. If the virtual machine is running, changes will not take effect until you restart the virtual machine. The newly added resources are marked as pending changes for both the Environment and Disks tab in the Pending Changes banner at the top of the page. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. In the Environment tab, click Add Config Map, Secret or Service Account . Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource. Optional: Click Reload to revert the environment to its last saved state. Click Save . 
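For reference, when you save, the added resource is exposed to the VM as an additional disk backed by a matching volume in the VirtualMachine manifest. The following sketch shows roughly how an added secret might appear; the disk name, six-character serial, and secret name are hypothetical:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: my-app-secret-disk
            serial: D23YZ9          # the serial number generated in the console
            disk: {}
      volumes:
      - name: my-app-secret-disk
        secret:
          secretName: my-app-secret

Inside the guest, you can then mount the disk that carries this serial number as you would mount any other disk.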
Verification On the VirtualMachine details page, click the Disks tab and verify that the secret, config map, or service account is included in the list of disks. Restart the virtual machine by clicking Actions Restart . You can now mount the secret, config map, or service account as you would mount any other disk. 10.13.2. Removing a secret, config map, or service account from a virtual machine Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console. Prerequisites You must have at least one secret, config map, or service account that is attached to a virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Environment tab. Find the item that you want to delete in the list, and click Remove on the right side of the item. Click Save . Note You can reset the form to the last saved state by clicking Reload . Verification On the VirtualMachine details page, click the Disks tab. Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks. 10.13.3. Additional resources Providing sensitive data to pods Understanding and creating service accounts Understanding config maps 10.14. Installing VirtIO driver on an existing Windows virtual machine 10.14.1. About VirtIO drivers VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog . The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or added to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine. See also: Installing Virtio drivers on a new Windows virtual machine . 10.14.2. Supported VirtIO drivers for Microsoft Windows virtual machines Table 10.1. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes displays as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 10.14.3. Adding VirtIO drivers container disk to a virtual machine OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog . To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file. Prerequisites Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog . This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it not already present in the cluster, but it can reduce installation time. 
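If you want to confirm that the container disk is reachable before you edit the VM, one option is to inspect it with skopeo. This is only a sketch: the full registry path is an assumption, and the commands require a login for registry.redhat.io:

$ skopeo login registry.redhat.io
$ skopeo inspect docker://registry.redhat.io/container-native-virtualization/virtio-win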
Procedure Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster. spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration. The disk is available once the virtual machine has started: If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect. If the virtual machine is not running, use virtctl start <vm> . After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive. 10.14.4. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 10.14.5. Removing the VirtIO container disk from a virtual machine After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file. Procedure Edit the configuration file and remove the disk and the volume . USD oc edit vm <vm-name> spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk Reboot the virtual machine for the changes to take effect. 10.15. Installing VirtIO driver on a new Windows virtual machine 10.15.1. 
Prerequisites Windows installation media accessible by the virtual machine, such as importing an ISO into a data volume and attaching it to the virtual machine. 10.15.2. About VirtIO drivers VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog . The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or added to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine. See also: Installing VirtIO driver on an existing Windows virtual machine . 10.15.3. Supported VirtIO drivers for Microsoft Windows virtual machines Table 10.2. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes displays as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 10.15.4. Adding VirtIO drivers container disk to a virtual machine OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog . To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file. Prerequisites Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog . This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it not already present in the cluster, but it can reduce installation time. Procedure Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster. spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration. The disk is available once the virtual machine has started: If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect. If the virtual machine is not running, use virtctl start <vm> . After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive. 10.15.5. 
Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 10.15.6. Removing the VirtIO container disk from a virtual machine After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file. Procedure Edit the configuration file and remove the disk and the volume . USD oc edit vm <vm-name> spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk Reboot the virtual machine for the changes to take effect. 10.16. Using virtual Trusted Platform Module devices Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest. 10.16.1. About vTPM devices A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one. vTPM devices also protect virtual machines by temporarily storing secrets without physical hardware. However, using vTPM for persistent secret storage is not currently supported. vTPM discards stored secrets after a VM shuts down. 10.16.2. Adding a vTPM device to a virtual machine Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also temporarily stores secrets for that VM. Procedure Run the following command to update the VM configuration: USD oc edit vm <vm_name> Edit the VM spec so that it includes the tpm: {} line. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: {} 1 ... 1 Adds the TPM device to the VM. To apply your changes, save and exit the editor. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 10.17. Managing virtual machines with OpenShift Pipelines Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container. 
The Tekton Tasks Operator (TTO) integrates OpenShift Virtualization with OpenShift Pipelines. TTO includes cluster tasks and example pipelines that allow you to: Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes Run commands in VMs Manipulate disk images with libguestfs tools Important Managing virtual machines with Red Hat OpenShift Pipelines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.17.1. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed OpenShift Pipelines . 10.17.2. Deploying the Tekton Tasks Operator resources The Tekton Tasks Operator (TTO) cluster tasks and example pipelines are not deployed by default when you install OpenShift Virtualization. To deploy TTO resources, enable the deployTektonTaskResources feature gate in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Set the spec.featureGates.deployTektonTaskResources field to true . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: kubevirt-hyperconverged spec: tektonPipelinesNamespace: <user_namespace> 1 featureGates: deployTektonTaskResources: true 2 #... 1 The namespace where the pipelines are to be run. 2 The feature gate to be enabled to deploy TTO resources. Note The cluster tasks and example pipelines remain available even if you disable the feature gate later. Save your changes and exit the editor. 10.17.3. Virtual machine tasks supported by the Tekton Tasks Operator The following table shows the cluster tasks that are included as part of the Tekton Tasks Operator. Table 10.3. Virtual machine tasks supported by the Tekton Tasks Operator Task Description create-vm-from-template Create a virtual machine from a template. copy-template Copy a virtual machine template. modify-vm-template Modify a virtual machine template. modify-data-object Create or delete data volumes or data sources. cleanup-vm Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. disk-virt-customize Use the virt-customize tool to run a customization script on a target PVC. disk-virt-sysprep Use the virt-sysprep tool to run a sysprep script on a target PVC. wait-for-vmi-status Wait for a specific status of a virtual machine instance and fail or succeed based on the status. 10.17.4. Example pipelines The Tekton Tasks Operator includes the following example Pipeline manifests. You can run the example pipelines by using the web console or CLI. Windows 10 installer pipeline This pipeline installs Windows 10 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process. 
Windows 10 customize pipeline This pipeline clones the data volume of a basic Windows 10 installation, customizes it by installing Microsoft SQL Server Express, and then creates a new image and template. 10.17.4.1. Running the example pipelines using the web console You can run the example pipelines from the Pipelines menu in the web console. Procedure Click Pipelines Pipelines in the side menu. Select a pipeline to open the Pipeline details page. From the Actions list, select Start . The Start Pipeline dialog is displayed. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status. 10.17.4.2. Running the example pipelines using the CLI Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline. Procedure To run the Windows 10 installer pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> 1 pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {} 1 Specify the URL for the Windows 10 64-bit ISO file. The product language must be English (United States). Apply the PipelineRun manifest: USD oc apply -f windows10-installer-run.yaml To run the Windows 10 customize pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {} Apply the PipelineRun manifest: USD oc apply -f windows10-customize-run.yaml 10.17.5. Additional resources Creating CI/CD solutions for applications using Red Hat OpenShift Pipelines 10.18. Advanced virtual machine management 10.18.1. 
Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 10.18.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 10.18.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 10.18.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 10.18.2.1. About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. Note Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. 10.18.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 10.18.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. 
Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 ... 10.18.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 10.18.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 10.18.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" ... 10.18.2.3. 
Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 10.18.3. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 10.18.3.1. Configuring certificate rotation You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 10.18.3.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed. 10.18.4. Using UEFI mode for virtual machines You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode. 10.18.4.1. About UEFI mode for virtual machines Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 10.18.4.2. Booting virtual machines in UEFI mode You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest. 
Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode: Booting in UEFI mode with secure boot active apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 ... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in UEFI mode to occur. 2 OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 10.18.5. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 10.18.5.1. Prerequisites A Linux bridge must be connected . The PXE server must be connected to the same VLAN as the bridge. 10.18.5.2. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites A Linux bridge must be connected. The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ "cniVersion": "0.3.1", "name": "pxe-net-conf", "plugins": [ { "type": "cnv-bridge", "bridge": "br1", "vlan": 1 1 }, { "type": "cnv-tuning" 2 } ] }' 1 Optional: The VLAN tag. 2 The cnv-tuning plugin provides support for custom MAC addresses. Note The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN. Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. 
Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from OpenShift Container Platform. USD ip addr Example output ... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 10.18.5.3. OpenShift Virtualization networking glossary OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) a description of the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Preboot eXecution Environment (PXE) an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client. 10.18.6. Using huge pages with virtual machines You can use huge pages as backing memory for virtual machines in your cluster. 10.18.6.1. Prerequisites Nodes must have pre-allocated huge pages configured . 10.18.6.2. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. 
If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages. 10.18.6.3. Configuring huge pages for virtual machines You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration. The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi . Note The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance. If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect. Prerequisites Nodes must have pre-allocated huge pages configured. Procedure In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain . The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi : kind: VirtualMachine ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 ... 1 The total amount of memory requested for the virtual machine. This value must be divisible by the page size. 2 The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi . The page size must be smaller than the requested memory. Apply the virtual machine configuration: USD oc apply -f <virtual_machine>.yaml 10.18.7. Enabling dedicated resources for virtual machines To improve performance, you can dedicate node resources, such as CPU, to a virtual machine. 10.18.7.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 10.18.7.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. The virtual machine must be powered off. 10.18.7.3. Enabling dedicated resources for a virtual machine You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. 
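If you manage virtual machines as YAML rather than through the web console, the setting that this procedure toggles typically corresponds to the dedicatedCpuPlacement field in the CPU section of the VM spec. A minimal sketch, with a hypothetical VM name and CPU count:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-dedicated
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true   # requests dedicated (pinned) CPUs from the CPU Manager
# ...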
Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. On the Scheduling tab, click the pencil icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 10.18.8. Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 10.18.8.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 10.18.8.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 10.18.8.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 10.18.8.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM that inherits the CPU model of the node where it is scheduled. 10.18.9. Configuring PCI passthrough The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. 
When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system. Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI). 10.18.9.1. About preparing a host device for PCI passthrough To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator. To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR. 10.18.9.1.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites Administrative privilege to a working OpenShift Container Platform cluster. Intel or AMD CPU hardware. Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 10.18.9.1.2. Binding PCI devices to the VFIO driver To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. Prerequisites You added kernel arguments to enable IOMMU for the CPU. Procedure Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device. USD lspci -nnv | grep -i nvidia Example output 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) Create a Butane config file, 100-worker-vfiopci.bu , binding the PCI device to the VFIO driver. Note See "Creating machine configs with Butane" for information about Butane. Example variant: openshift version: 4.12.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci 1 Applies the new kernel argument only to worker nodes. 
2 Specify the previously determined vendor-ID value ( 10de ) and the device-ID value ( 1eb8 ) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. 3 The file that loads the vfio-pci kernel module on the worker nodes. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml , containing the configuration to be delivered to the worker nodes: USD butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml Apply the MachineConfig object to the worker nodes: USD oc apply -f 100-worker-vfiopci.yaml Verify that the MachineConfig object was added. USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s Verification Verify that the VFIO driver is loaded. USD lspci -nnk -d 10de: The output confirms that the VFIO driver is being used. Example output 10.18.9.1.3. Exposing PCI host devices in the cluster using the CLI To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 ... 1 The host devices that are permitted to be used in the cluster. 2 The list of PCI devices available on the node. 3 The vendor-ID and the device-ID required to identify the PCI device. 4 The name of a PCI host device. 5 Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin. Note The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization. Save your changes and exit the editor. Verification Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100 , nvidia.com/TU104GL_Tesla_T4 , and intel.com/qat resource names. 
USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 10.18.9.1.4. Removing PCI host devices from the cluster using the CLI To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector , resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted. Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" ... Save your changes and exit the editor. Verification Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 10.18.9.2. Configuring virtual machines for PCI passthrough After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines. 10.18.9.2.1. Assigning a PCI device to a virtual machine When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. Procedure Assign the PCI device to a virtual machine as a host device. Example apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1 1 The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device. Verification Use the following command to verify that the host device is available from the virtual machine. USD lspci -nnk | grep NVIDIA Example output USD 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) 10.18.9.3. 
Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS Managing file permissions Post-installation machine configuration tasks 10.18.10. Configuring vGPU passthrough Your virtual machines can access a virtual GPU (vGPU) hardware. Assigning a vGPU to your virtual machine allows you do the following: Access a fraction of the underlying hardware's GPU to achieve high performance benefits in your virtual machine. Streamline resource-intensive I/O operations. Important vGPU passthrough can only be assigned to devices that are connected to clusters running in a bare metal environment. 10.18.10.1. Assigning vGPU passthrough devices to a virtual machine Use the OpenShift Container Platform web console to assign vGPU passthrough devices to your virtual machine. Prerequisites The virtual machine must be stopped. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Select the virtual machine to which you want to assign the device. On the Details tab, click GPU devices . If you add a vGPU device as a host device, you cannot access the device with the VNC console. Click Add GPU device , enter the Name and select the device from the Device name list. Click Save . Click the YAML tab to verify that the new devices have been added to your cluster configuration in the hostDevices section. Note You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems, such as Windows 10 or RHEL 7. To display resources that are connected to your cluster, click Compute Hardware Devices from the side menu. 10.18.10.2. Additional resources Creating virtual machines Creating virtual machine templates 10.18.11. Configuring mediated devices OpenShift Virtualization automatically creates mediated devices, such as virtual GPUs (vGPUs), if you provide a list of devices in the HyperConverged custom resource (CR). Important Declarative configuration of mediated devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.18.11.1. About using the NVIDIA GPU Operator The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks related to bootstrapping GPU nodes. Since the GPU is a special resource in the cluster, you must install some components before deploying application workloads onto the GPU. These components include the NVIDIA drivers which enables compute unified device architecture (CUDA), Kubernetes device plugin, container runtime and others such as automatic node labelling, monitoring and more. Note The NVIDIA GPU Operator is supported only by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . There are two ways to enable GPUs with OpenShift Container Platform OpenShift Virtualization: the OpenShift Container Platform-native way described here and by using the NVIDIA GPU Operator. 
The NVIDIA GPU Operator is a Kubernetes Operator that enables OpenShift Container Platform OpenShift Virtualization to expose GPUs to virtualized workloads running on OpenShift Container Platform. It allows users to easily provision and manage GPU-enabled virtual machines, providing them with the ability to run complex artificial intelligence/machine learning (AI/ML) workloads on the same platform as their other workloads. It also provides an easy way to scale the GPU capacity of their infrastructure, allowing for rapid growth of GPU-based workloads. For more information about using the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated VMs, see NVIDIA GPU Operator with OpenShift Virtualization . 10.18.11.2. About using virtual GPUs with OpenShift Virtualization Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters. Note Refer to your hardware vendor's documentation for functionality and support details. Mediated device A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests. 10.18.11.2.1. Prerequisites If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices. If you use NVIDIA cards, you installed the NVIDIA GRID driver . 10.18.11.2.2. Configuration overview When configuring mediated devices, an administrator must complete the following tasks: Create the mediated devices. Expose the mediated devices to the cluster. The HyperConverged CR includes APIs that accomplish both tasks. Creating mediated devices ... spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDevicesTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value> ... 1 Required: Configures global settings for the cluster. 2 Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDevicesTypes configuration. 3 Required if you use nodeMediatedDeviceTypes . Overrides the global mediatedDevicesTypes configuration for the specified nodes. 4 Required if you use nodeMediatedDeviceTypes . Must include a key:value pair. Exposing mediated devices to the cluster ... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2 ... 1 Exposes the mediated devices that map to this value on the host. Note You can see the mediated device types that your device supports by viewing the contents of /sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name , substituting the correct values for your system. For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q . Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type. 2 The resourceName should match that allocated on the node. 
Find the resourceName by using the following command: USD oc get USDNODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))' 10.18.11.2.3. How vGPUs are assigned to nodes For each physical device, OpenShift Virtualization configures the following values: A single mdev type. The maximum number of instances of the selected mdev type. The cluster architecture affects how devices are created and assigned to nodes. Large cluster with multiple cards per node On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example: ... mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 ... In this scenario, each node has two cards, both of which support the following vGPU types: nvidia-105 ... nvidia-108 nvidia-217 nvidia-299 ... On each node, OpenShift Virtualization creates the following vGPUs: 16 vGPUs of type nvidia-105 on the first card. 2 vGPUs of type nvidia-108 on the second card. One node has a single card that supports more than one requested vGPU type OpenShift Virtualization uses the supported type that comes first on the mediatedDevicesTypes list. For example, the card on a node card supports nvidia-223 and nvidia-224 . The following mediatedDevicesTypes list is configured: ... mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-22 - nvidia-223 - nvidia-224 ... In this example, OpenShift Virtualization uses the nvidia-223 type. 10.18.11.2.4. About changing and removing mediated devices The cluster's mediated device configuration can be updated with OpenShift Virtualization by: Editing the HyperConverged CR and change the contents of the mediatedDevicesTypes stanza. Changing the node labels that match the nodeMediatedDeviceTypes node selector. Removing the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Note If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas. Depending on the specific changes, these actions cause OpenShift Virtualization to reconfigure mediated devices or remove them from the cluster nodes. 10.18.11.2.5. Preparing hosts for mediated devices You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices. 10.18.11.2.5.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites Administrative privilege to a working OpenShift Container Platform cluster. Intel or AMD CPU hardware. Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ... 1 Applies the new kernel argument only to worker nodes. 
2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 10.18.11.2.6. Adding and removing mediated devices You can add or remove mediated devices. 10.18.11.2.6.1. Creating and exposing mediated devices You can expose and create mediated devices such as virtual GPUs (vGPUs) by editing the HyperConverged custom resource (CR). Prerequisites You enabled the IOMMU (Input-Output Memory Management Unit) driver. Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the mediated device information to the HyperConverged CR spec , ensuring that you include the mediatedDevicesConfiguration and permittedHostDevices stanzas. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: <.> mediatedDevicesTypes: <.> - nvidia-231 nodeMediatedDeviceTypes: <.> - mediatedDevicesTypes: <.> - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: <.> mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q ... <.> Creates mediated devices. <.> Required: Global mediatedDevicesTypes configuration. <.> Optional: Overrides the global configuration for specific nodes. <.> Required if you use nodeMediatedDeviceTypes . <.> Exposes mediated devices to the cluster. Save your changes and exit the editor. Verification You can verify that a device was added to a specific node by running the following command: USD oc describe node <node_name> 10.18.11.2.6.2. Removing mediated devices from the cluster using the CLI To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q 1 To remove the nvidia-231 device type, delete it from the mediatedDevicesTypes array. 2 To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field. Save your changes and exit the editor. 10.18.11.3. Using mediated devices A vGPU is a type of mediated device; the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines. 10.18.11.3.1. 
Assigning a mediated device to a virtual machine Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines. Prerequisites The mediated device is configured in the HyperConverged custom resource. Procedure Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest: Example virtual machine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-1Q name: gpu2 1 The resource name associated with the mediated device. 2 A name to identify the device on the VM. Verification To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest: USD lspci -nnk | grep <device_name> 10.18.11.4. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS 10.18.12. Configuring a watchdog Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service. 10.18.12.1. Prerequisites The virtual machine must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb . 10.18.12.2. Defining a watchdog device Define how the watchdog proceeds when the operating system (OS) no longer responds. Table 10.4. Available actions poweroff The virtual machine (VM) powers down immediately. If spec.running is set to true , or spec.runStrategy is not set to manual , then the VM reboots. reset The VM reboots in place and the guest OS cannot react. Because the length of time required for the guest OS to reboot can cause liveness probes to timeout, use of this option is discouraged. This timeout can extend the time it takes the VM to reboot if cluster-level protections notice the liveness probe failed and forcibly reschedule it. shutdown The VM gracefully powers down by stopping all services. Procedure Create a YAML file with the following contents: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 ... 1 Specify the watchdog action ( poweroff , reset , or shutdown ). The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog . This device can now be used by the watchdog binary. Apply the YAML file to your cluster by running the following command: USD oc apply -f <file_name>.yaml Important This procedure is provided for testing watchdog functionality only and must not be run on production machines. Run the following command to verify that the VM is connected to the watchdog device: USD lspci | grep watchdog -i Run one of the following commands to confirm the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Terminate the watchdog service: # pkill -9 watchdog 10.18.12.3. Installing a watchdog device Install the watchdog package on your virtual machine and start the watchdog service. 
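Before you install the watchdog service, you can optionally confirm from inside the guest that the watchdog device defined in the previous section is present. This is a minimal pre-check sketch; it assumes the device is exposed as /dev/watchdog, as described above.
# ls -l /dev/watchdog
If the device node is not listed, review the watchdog definition in the VirtualMachine manifest before you continue with the installation procedure.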
Procedure As a root user, install the watchdog package and dependencies: # yum install watchdog Uncomment the following line in the /etc/watchdog.conf file, and save the changes: #watchdog-device = /dev/watchdog Enable the watchdog service to start on boot: # systemctl enable --now watchdog.service 10.18.12.4. Additional resources Monitoring application health by using health checks 10.18.13. Automatic importing and updating of pre-defined boot sources You can use boot sources that are system-defined and included with OpenShift Virtualization or user-defined , which you create. System-defined boot source imports and updates are controlled by the product feature gate. You can enable, disable, or re-enable updates using the feature gate. User-defined boot sources are not controlled by the product feature gate and must be individually managed to opt in or opt out of automatic imports and updates. Important As of version 4.10, OpenShift Virtualization automatically imports and updates boot sources, unless you manually opt out or do not set a default storage class. If you upgrade to version 4.10, you must manually enable automatic imports and updates for boot sources from version 4.9 or earlier. 10.18.13.1. Enabling automatic boot source updates If you have boot sources from OpenShift Virtualization 4.9 or earlier, you must manually turn on automatic updates for these boot sources. All boot sources in OpenShift Virtualization 4.10 and later are automatically updated by default. To enable automatic boot source imports and updates, set the cdi.kubevirt.io/dataImportCron field to true for each boot source you want to update automatically. Procedure To turn on automatic updates for a boot source, use the following command to apply the dataImportCron label to the data source: USD oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true 1 1 Specifying true turns on automatic updates for the rhel8 boot source. 10.18.13.2. Disabling automatic boot source updates Disabling automatic boot source imports and updates can be helpful to reduce the number of logs in disconnected environments or to reduce resource usage. To disable automatic boot source imports and updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged custom resource (CR) to false . Note User-defined boot sources are not affected by this setting. Procedure Use the following command to disable automatic boot source updates: USD oc patch hco kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", \ "value": false}]' 10.18.13.3. Re-enabling automatic boot source updates If you have previously disabled automatic boot source updates, you must manually re-enable the feature. Set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged custom resource (CR) to true . Procedure Use the following command to re-enable automatic updates: USD oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]' 10.18.13.4. Configuring a storage class for user-defined boot source updates You can configure a storage class that allows automatic importing and updating for user-defined boot sources. Procedure Define a new storageClassName by editing the HyperConverged custom resource (CR). 
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <appropriate_class_name> ... Set the new default storage class by running the following commands: USD oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' USD oc patch storageclass <appropriate_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 10.18.13.5. Enabling automatic updates for user-defined boot sources OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update user-defined boot sources. You must manually enable automatic imports and updates on a user-defined boot sources by editing the HyperConverged custom resource (CR). Procedure Use the following command to open the HyperConverged CR for editing: USD oc edit -n openshift-cnv HyperConverged Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example: Example in CentOS 7 apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1 spec: schedule: "0 */12 * * *" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 10Gi managedDataSource: centos7 4 retentionPolicy: "None" 5 1 This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer . 2 Schedule for the job specified in cron format. 3 Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod , which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image , but the CDI importer is not authorized to access it. 4 For the custom image to be detected as an available boot source, the name of the image's managedDataSource must match the name of the template's DataSource , which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file. 5 Use All to retain data volumes and data sources when the cron job is deleted. Use None to delete data volumes and data sources when the cron job is deleted. 10.18.13.6. Disabling an automatic update for a system-defined or user-defined boot source You can disable automatic imports and updates for a user-defined boot source and for a system-defined boot source. Because system-defined boot sources are not listed by default in the spec.dataImportCronTemplates of the HyperConverged custom resource (CR), you must add the boot source and disable auto imports and updates. Procedure To disable automatic imports and updates for a user-defined boot source, remove the boot source from the spec.dataImportCronTemplates field in the custom resource list. To disable automatic imports and updates for a system-defined boot source: Edit the HyperConverged CR and add the boot source to spec.dataImportCronTemplates . Disable automatic imports and updates by setting the dataimportcrontemplate.kubevirt.io/enable annotation to false . 
For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron ... 10.18.13.7. Verifying the status of a boot source You can verify whether a boot source is system-defined or user-defined. The status section of each boot source listed in the status.dataImportCronTemplates field of the HyperConverged CR indicates the type of boot source. For example, commonTemplate: true indicates a system-defined ( commonTemplate ) boot source and status: {} indicates a user-defined boot source. Procedure Use the oc get command to list the dataImportCronTemplates in the HyperConverged CR. Verify the status of the boot source. Example output ... apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged ... spec: ... status: 1 ... dataImportCronTemplates: 2 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 3 ... - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 4 ... 1 The status field for the HyperConverged CR. 2 The dataImportCronTemplates field, which lists all defined boot sources. 3 Indicates a system-defined boot source. 4 Indicates a user-defined boot source. 10.18.14. Enabling descheduler evictions on virtual machines You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node. Important Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.18.14.1. Descheduler profiles Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load. DevPreviewLongLifecycle This profile balances resource usage between nodes and enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count.
LowNodeUtilization : evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). 10.18.14.2. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites Cluster administrator privileges. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section and select DevPreviewLongLifecycle . The AffinityAndTaints profile is enabled by default. Important The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). 10.18.14.3. Enabling descheduler evictions on a virtual machine (VM) After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR). Prerequisites Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI ( oc ). Ensure that the VM is not running. 
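For example, you can confirm that the VM is stopped before you edit it, and stop it if necessary. This is a minimal sketch; <vm_name> and <namespace> are placeholders for your own values.
USD oc get vm <vm_name> -n <namespace>
USD virtctl stop <vm_name> -n <namespace>
Review the status reported for the VM before you proceed.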
Procedure Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR: apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: "true" If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify the DevPreviewLongLifecycle in the spec.profile section of the KubeDescheduler object: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1 1 By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . The descheduler is now enabled on the VM. 10.18.14.4. Additional resources Evicting pods using the descheduler 10.19. Importing virtual machines 10.19.1. TLS certificates for data volume imports 10.19.1.1. Adding TLS certificates for authenticating data volume imports TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume. Create the config map by referencing the relative file path for the TLS certificate. Procedure Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace. USD oc get ns Create the config map: USD oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem> 10.19.1.2. Example: Config map created from a TLS certificate The following example is of a config map created from ca.pem TLS certificate. apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> ... -----END CERTIFICATE----- 10.19.2. Importing virtual machine images with data volumes Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage. The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry. Important When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details. 10.19.2.1. Prerequisites If the endpoint requires a TLS certificate, the certificate must be included in a config map in the same namespace as the data volume and referenced in the data volume configuration. To import a container disk: You might need to prepare a container disk from a virtual machine image and store it in your container registry before importing it. If the container registry does not have TLS, you must add the registry to the insecureRegistries field of the HyperConverged custom resource before you can import a container disk from it. You might need to define a storage class or prepare CDI scratch space for this operation to complete successfully. 10.19.2.2. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. 
Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required Note CDI now uses the OpenShift Container Platform cluster-wide proxy configuration . 10.19.2.3. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.19.2.4. Importing a virtual machine image into storage by using a data volume You can import a virtual machine image into storage by using a data volume. The virtual machine image can be hosted at an HTTP or HTTPS endpoint or the image can be built into a container disk and stored in a container registry. You specify the data source for the image in a VirtualMachine configuration file. When the virtual machine is created, the data volume with the virtual machine image is imported into storage. Prerequisites To import a virtual machine image you must have the following: A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz . An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source. To import a container disk, you must have a virtual machine image built into a container disk and stored in a container registry, along with any authentication credentials needed to access the data source. If the virtual machine must communicate with servers that use self-signed certificates or certificates not signed by the system CA bundle, you must create a config map in the same namespace as the data volume. Procedure If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml : apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 2 secretKey: "" 3 1 Specify the name of the Secret . 2 Specify the Base64-encoded key ID or user name. 3 Specify the Base64-encoded secret key or password. 
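For example, you can produce the Base64-encoded values for the data fields with a standard shell utility. This is a minimal sketch; <access_key_id> and <secret_key> are placeholders for your own credentials.
USD echo -n "<access_key_id>" | base64
USD echo -n "<secret_key>" | base64
Copy each output string into the corresponding field of the Secret manifest.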
Apply the Secret manifest: USD oc apply -f endpoint-secret.yaml Edit the VirtualMachine manifest, specifying the data source for the virtual machine image you want to import, and save it as vm-fedora-datavolume.yaml : apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 4 secretRef: endpoint-secret 5 certConfigMap: "" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the virtual machine. 2 Specify the name of the data volume. 3 Specify http for an HTTP or HTTPS endpoint. Specify registry for a container disk image imported from a registry. 4 Specify the URL or registry endpoint of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" . 5 Specify the Secret name if you created a Secret for the data source. 6 Optional: Specify a CA certificate config map. Create the virtual machine: USD oc create -f vm-fedora-datavolume.yaml Note The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the virtual machine. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the virtual machine has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 10.19.2.5. Additional resources Configure preallocation mode to improve write performance for data volume operations. 10.19.3. Importing virtual machine images into block storage with data volumes You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC). Important When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details. 10.19.3.1. 
Prerequisites If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 10.19.3.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.19.3.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 10.19.3.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10> <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the previous step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the previous step. 10.19.3.5. Importing a virtual machine image into block storage by using a data volume You can import a virtual machine image into block storage by using a data volume. You reference the data volume in a VirtualMachine manifest before you create a virtual machine. Prerequisites A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz . An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
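Optionally, you can confirm the format and virtual size of the disk image before you import it, for example by running qemu-img against a local copy of the file. This is a minimal sketch; the file name is a placeholder.
USD qemu-img info <Fedora-Cloud-Base-35-1.2.x86_64.qcow2>
The output reports the image format (for example, qcow2 or raw) and the virtual disk size, which helps you size the data volume request.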
Procedure If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml : apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 2 secretKey: "" 3 1 Specify the name of the Secret . 2 Specify the Base64-encoded key ID or user name. 3 Specify the Base64-encoded secret key or password. Apply the Secret manifest: USD oc apply -f endpoint-secret.yaml Create a DataVolume manifest, specifying the data source for the virtual machine image and Block for storage.volumeMode . apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi 1 Specify the name of the data volume. 2 Optional: Set the storage class or omit it to accept the cluster default. 3 Specify the HTTP or HTTPS URL of the image to import. 4 Specify the Secret name if you created a Secret for the data source. 5 The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify Block . Create the data volume to import the virtual machine image: USD oc create -f import-pv-datavolume.yaml You can reference this data volume in a VirtualMachine manifest before you create a virtual machine. 10.19.3.6. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required Note CDI now uses the OpenShift Container Platform cluster-wide proxy configuration . 10.19.3.7. Additional resources Configure preallocation mode to improve write performance for data volume operations. 10.20. Cloning virtual machines 10.20.1. Enabling user permissions to clone data volumes across namespaces The isolating nature of namespaces means that users cannot by default clone resources between namespaces. To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace. 10.20.1.1. Prerequisites Only a user with the cluster-admin role can create cluster roles. 10.20.1.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. 
If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.20.1.3. Creating RBAC resources for cloning data volumes Create a new cluster role that enables permissions for all actions for the datavolumes resource. Procedure Create a ClusterRole manifest: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: ["cdi.kubevirt.io"] resources: ["datavolumes/source"] verbs: ["*"] 1 Unique name for the cluster role. Create the cluster role in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the ClusterRole manifest created in the step. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the step. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io 1 Unique name for the role binding. 2 The namespace for the source data volume. 3 The namespace to which the data volume is cloned. 4 The name of the cluster role created in the step. Create the role binding in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the RoleBinding manifest created in the step. 10.20.2. Cloning a virtual machine disk into a new data volume You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 10.20.2.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 10.20.2.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.20.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine. Note When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted. 
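The first step of the following procedure asks you to identify the name and namespace of the PVC that is associated with the virtual machine disk. A minimal sketch of doing this from the CLI, where <vm_name> and <namespace> are placeholders for your own values:

oc get vm <vm_name> -n <namespace> -o jsonpath='{.spec.template.spec.volumes}'
oc get dv,pvc -n <namespace>

The first command lists the volumes that the VM references; the second lists the data volumes and PVCs in the namespace so that you can match the volume name to its claim.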
Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). Procedure Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume. For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 10.20.2.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 10.20.3. Cloning a virtual machine by using a data volume template You can create a new virtual machine by cloning the persistent volume claim (PVC) of an existing VM. By including a dataVolumeTemplate in your virtual machine configuration file, you create a new data volume from the original PVC. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 10.20.3.1. Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 10.20.3.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. 
If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.20.3.3. Creating a new virtual machine from a cloned persistent volume claim by using a data volume template You can create a virtual machine that clones the persistent volume claim (PVC) of an existing virtual machine into a data volume. Reference a dataVolumeTemplate in the virtual machine manifest and the source PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine. Note When a data volume is created as part of the data volume template of a virtual machine, the lifecycle of the data volume is then dependent on the virtual machine. If the virtual machine is deleted, the data volume and associated PVC are also deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ). Procedure Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a VirtualMachine object. The following virtual machine example clones my-favorite-vm-disk , which is located in the source-namespace namespace. The 2Gi data volume called favorite-clone is created from my-favorite-vm-disk . For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: "source-namespace" name: "my-favorite-vm-disk" 1 The virtual machine to create. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml 10.20.3.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 10.20.4. Cloning a virtual machine disk into a new block storage data volume You can clone the persistent volume claim (PVC) of a virtual machine disk into a new block data volume by referencing the source PVC in your data volume configuration file. Warning Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem . However, you can only clone between different volume modes if they are of the contentType: kubevirt . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . 10.20.4.1. 
Prerequisites Users need additional permissions to clone the PVC of a virtual machine disk into another namespace. 10.20.4.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.20.4.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 10.20.4.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10> <loop10> 1 2 1 The path of the loop device. 2 The file created in the previous step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the PersistentVolume manifest created in the previous step. 10.20.4.5. Cloning the persistent volume claim of a virtual machine disk into a new data volume You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine. Note When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted. Prerequisites Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it. Install the OpenShift CLI ( oc ).
At least one available block persistent volume (PV) that is the same size as or larger than the source PVC. Procedure Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, volumeMode: Block so that an available block PV is used, and the size of the new data volume. For example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5 1 The name of the new data volume. 2 The namespace where the source PVC exists. 3 The name of the source PVC. 4 The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. 5 Specifies that the destination is a block PV Start cloning the PVC by creating the data volume: USD oc create -f <cloner-datavolume>.yaml Note Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones. 10.20.4.6. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 10.21. Virtual machine networking 10.21.1. Configuring the virtual machine for the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode Note Traffic on the virtual Network Interface Cards (vNICs) that are attached to the default pod network is interrupted during live migration. 10.21.1.1. Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP. Procedure Edit the interfaces spec of your virtual machine configuration file: kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . 
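For example, to allow incoming traffic on more than one port, list multiple entries in the ports array. The following fragment is a sketch that could replace the ports stanza in the example above; the names and port numbers are illustrative:

    interfaces:
    - name: default
      masquerade: {}
      ports:
      - name: http
        port: 80
        protocol: TCP
      - name: ssh
        port: 22
        protocol: TCP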
Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 10.21.1.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 ... interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 10.21.2. Creating a service to expose a virtual machine You can expose a virtual machine within the cluster or outside the cluster by using a Service object. 10.21.2.1. About services A Kubernetes service is an abstract way to expose an application running on a set of pods as a network service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a spec.type in the Service object: ClusterIP Exposes the service on an internal IP address within the cluster. ClusterIP is the default service type . NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a service accessible from outside the cluster. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. 
Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. Additional resources Installing the MetalLB Operator Configuring services to use MetalLB 10.21.2.1.1. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 10.21.2.2. Exposing a virtual machine as a service Create a ClusterIP , NodePort , or LoadBalancer service to connect to a running virtual machine (VM) from within or outside the cluster. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add the label special: key in the spec.template.metadata.labels section. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7 1 The name of the Service object. 2 The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest. 3 Optional: Specifies how the nodes distribute service traffic that is received on external IP addresses. This only applies to NodePort and LoadBalancer service types. The default value is Cluster which routes traffic evenly to all cluster endpoints. 4 Optional: When set, the nodePort value must be unique across all services. If not specified, a value in the range above 30000 is dynamically allocated. 5 Optional: The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. If targetPort is not specified, it takes the same value as port . 6 The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest. 7 The type of service. Possible values are ClusterIP , NodePort and LoadBalancer . Save the Service manifest file. Create the service by running the following command: USD oc create -f <service_name>.yaml Start the VM. If the VM is already running, restart it. 
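If the service does not select any endpoints after the VM starts, confirm that the special: key label was passed through to the virt-launcher pod and that the service has an endpoint. A minimal sketch, using the names from the example above:

oc get pods -n example-namespace -l special=key
oc get endpoints vmservice -n example-namespace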
Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace Example output for ClusterIP service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m Example output for NodePort service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m Example output for LoadBalancer service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s Choose the appropriate method to connect to the virtual machine: For a ClusterIP service, connect to the VM from within the cluster by using the service IP address and the service port. For example: USD ssh [email protected] -p 27017 For a NodePort service, connect to the VM by specifying the node IP address and the node port outside the cluster network. For example: USD ssh fedora@USDNODE_IP -p 30000 For a LoadBalancer service, use the vinagre client to connect to your virtual machine by using the public IP address and port. External ports are dynamically allocated. 10.21.2.3. Additional resources Configuring ingress cluster traffic using a NodePort Configuring ingress cluster traffic using a load balancer 10.21.3. Connecting a virtual machine to a Linux bridge network By default, OpenShift Virtualization is installed with a single, internal pod network. You must create a Linux bridge network attachment definition (NAD) in order to connect to additional networks. To attach a virtual machine to an additional network: Create a Linux bridge node network configuration policy. Create a Linux bridge network attachment definition. Configure the virtual machine, enabling the virtual machine to recognize the network attachment definition. For more information about scheduling, interface types, and other node networking activities, see the node networking section. 10.21.3.1. Connecting to the network through the network attachment definition 10.21.3.1.1. Creating a Linux bridge node network configuration policy Use a NodeNetworkConfigurationPolicy manifest YAML file to create the Linux bridge. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 10.21.3.2. Creating a Linux bridge network attachment definition Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. 10.21.3.2.1. Creating a Linux bridge network attachment definition in the web console Network administrators can create network attachment definitions to provide layer-2 networking to pods and virtual machines. Procedure In the web console, click Networking Network Attachment Definitions . Click Create Network Attachment Definition . 
Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Click the Network Type list and select CNV Linux bridge . Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. 10.21.3.2.2. Creating a Linux bridge network attachment definition in the CLI As a network administrator, you can configure a network attachment definition of type cnv-bridge to provide layer-2 networking to pods and virtual machines. Prerequisites The node must support nftables and the nft binary must be deployed to enable MAC spoof check. Procedure Create a network attachment definition in the same namespace as the virtual machine. Add the virtual machine to the network attachment definition, as in the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ "cniVersion": "0.3.1", "name": "<bridge-network>", 3 "type": "cnv-bridge", 4 "bridge": "<bridge-interface>", 5 "macspoofchk": true, 6 "vlan": 100, 7 "preserveDefaultVlan": false 8 }' 1 The name for the NetworkAttachmentDefinition object. 2 Optional: Annotation key-value pair for node selection, where bridge-interface must match the name of a bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the bridge-interface bridge connected. 3 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 4 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI. 5 The name of the Linux bridge configured on the node. 6 Optional: Flag to enable MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. 7 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 8 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Create the network attachment definition: USD oc create -f <network-attachment-definition.yaml> 1 1 Where <network-attachment-definition.yaml> is the file name of the network attachment definition manifest. Verification Verify that the network attachment definition was created by running the following command: USD oc get network-attachment-definition <bridge-network> 10.21.3.3. Configuring the virtual machine for a Linux bridge network 10.21.3.3.1. Creating a NIC for a virtual machine in the web console Create and attach additional NICs to a virtual machine from the web console. 
Prerequisites A network attachment definition must be available. Procedure In the correct project in the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Network Interfaces tab to view the NICs already attached to the virtual machine. Click Add Network Interface to create a new slot in the list. Select a network attachment definition from the Network list for the additional network. Fill in the Name , Model , Type , and MAC Address for the new NIC. Click Save to save and attach the NIC to the virtual machine. 10.21.3.3.2. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 10.21.3.3.3. Attaching a virtual machine to an additional network in the CLI Attach a virtual machine to an additional network by adding a bridge interface and specifying a network attachment definition in the virtual machine configuration. This procedure uses a YAML file to demonstrate editing the configuration and applying the updated file to the cluster. You can alternatively use the oc edit <object> <name> command to edit an existing virtual machine. Prerequisites Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect. Procedure Create or edit a configuration of a virtual machine that you want to connect to the bridge network. Add the bridge interface to the spec.template.spec.domain.devices.interfaces list and the network attachment definition to the spec.template.spec.networks list. This example adds a bridge interface called bridge-net that connects to the a-bridge-network network attachment definition: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 ... networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3 ... 1 The name of the bridge interface. 2 The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry. 3 The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the default namespace or the same namespace where the VM is to be created. In this case, multus is used. Multus is a cloud network interface (CNI) plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Apply the configuration: USD oc apply -f <example-vm.yaml> Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 10.21.4. Connecting a virtual machine to an SR-IOV network You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps: Configure an SR-IOV network device. Configure an SR-IOV network. 
Connect the VM to the SR-IOV network. 10.21.4.1. Prerequisites You must have enabled global SR-IOV and VT-d settings in the firmware for the host . You must have installed the SR-IOV Network Operator . 10.21.4.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 . 6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3 . 10 Optional: Specify the device hex code of SR-IOV network device. 
The only allowed values are 158b , 1015 , 1017 . 11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false . The default value is false . Note If isRDMA flag is set to true , you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' 10.21.4.3. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: "<trust_vf>" 11 capabilities: <capabilities> 12 1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. 5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 
7 Optional: Replace <link_state> with the link state of virtual function (VF). Allowed value are enable , disable and auto . 8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF. 9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to Maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0 . 11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 12 Optional: Replace <capabilities> with the capabilities to configure for this network. To create the object, enter the following command. Replace <name> with a name for this additional network. USD oc create -f <name>-sriov-network.yaml Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 10.21.4.4. Connecting a virtual machine to an SR-IOV network You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. Procedure Include the SR-IOV network details in the spec.domain.devices.interfaces and spec.networks of the VM configuration: kind: VirtualMachine ... spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6 ... 1 A unique name for the interface that is connected to the pod network. 2 The masquerade binding to the default pod network. 3 A unique name for the SR-IOV interface. 4 The name of the pod network interface. This must be the same as the interfaces.name that you defined earlier. 5 The name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier. 6 The name of the SR-IOV network attachment definition. Apply the virtual machine configuration: USD oc apply -f <vm-sriov.yaml> 1 1 The name of the virtual machine YAML file. 10.21.5. Connecting a virtual machine to a service mesh OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. 10.21.5.1. Prerequisites You must have installed the Service Mesh Operator and deployed the service mesh control plane . You must have added the namespace where the virtual machine is created to the service mesh member roll . You must use the masquerade binding method for the default pod network. 10.21.5.2. Configuring a virtual machine for the service mesh To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true . Then expose your VM as a service to view your application in the mesh. Prerequisites To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090. 
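After you complete the following procedure and the VM is running, you can confirm that the sidecar was injected by listing the containers of the virt-launcher pod. This is a sketch that assumes the kubevirt.io/vm: vm-istio label from the example below; an istio-proxy container appears in the output if injection succeeded:

oc get pods -l kubevirt.io/vm=vm-istio -o jsonpath='{.items[*].spec.containers[*].name}'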
Procedure Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation. Example configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: "true" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk 1 The key/value pair (label) that must be matched to the service selector attribute. 2 The annotation to enable automatic sidecar injection. 3 The binding method (masquerade mode) for use with the default pod network. Apply the VM configuration: USD oc apply -f <vm_name>.yaml 1 1 The name of the virtual machine YAML file. Create a Service object to expose your VM to the service mesh. apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP 1 The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio . Create the service: USD oc create -f <service_name>.yaml 1 1 The name of the service YAML file. 10.21.6. Configuring IP addresses for virtual machines You can configure static and dynamic IP addresses for virtual machines. 10.21.6.1. Configuring an IP address for a new virtual machine using cloud-init You can use cloud-init to configure the IP address of a secondary NIC when you create a virtual machine (VM). The IP address can be dynamically or statically provisioned. Note If the VM is connected to the pod network, the pod network interface is the default route unless you update it. Prerequisites The virtual machine is connected to a secondary network. You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine. Procedure Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration: To configure a dynamic IP address, specify the interface name and enable DHCP: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 1 Specify the interface name. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 Specify the interface name. 2 Specify the static IP address. 10.21.7. Viewing the IP address of NICs on a virtual machine You can view the IP address for a network interface controller (NIC) by using the web console or the oc client. The QEMU guest agent displays additional information about the virtual machine's secondary networks. 10.21.7.1. Prerequisites Install the QEMU guest agent on the virtual machine. 10.21.7.2. Viewing the IP address of a virtual machine interface in the CLI The network interface configuration is included in the oc describe vmi <vmi_name> command. 
You can also view the IP address information by running ip addr on the virtual machine, or by running oc get vmi <vmi_name> -o yaml . Procedure Use the oc describe command to display the virtual machine interface configuration: USD oc describe vmi <vmi_name> Example output ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 10.21.7.3. Viewing the IP address of a virtual machine interface in the web console The IP information is displayed on the VirtualMachine details page for the virtual machine. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine name to open the VirtualMachine details page. The information for each attached NIC is displayed under IP Address on the Details tab. 10.21.8. Using a MAC address pool for virtual machines The KubeMacPool component provides a MAC address pool service for virtual machine NICs in a namespace. 10.21.8.1. About KubeMacPool KubeMacPool provides a MAC address pool per namespace and allocates MAC addresses for virtual machine NICs from the pool. This ensures that the NIC is assigned a unique MAC address that does not conflict with the MAC address of another virtual machine. Virtual machine instances created from that virtual machine retain the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. KubeMacPool is enabled by default when you install OpenShift Virtualization. You can disable a MAC address pool for a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Re-enable KubeMacPool for the namespace by removing the label. 10.21.8.2. Disabling a MAC address pool for a namespace in the CLI Disable a MAC address pool for virtual machines in a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Procedure Add the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. The following example disables KubeMacPool for two namespaces, <namespace1> and <namespace2> : USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore 10.21.8.3. Re-enabling a MAC address pool for a namespace in the CLI If you disabled KubeMacPool for a namespace and want to re-enable it, remove the mutatevirtualmachines.kubemacpool.io=ignore label from the namespace. Note Earlier versions of OpenShift Virtualization used the label mutatevirtualmachines.kubemacpool.io=allocate to enable KubeMacPool for a namespace. This is still supported but redundant as KubeMacPool is now enabled by default. Procedure Remove the KubeMacPool label from the namespace. The following example re-enables KubeMacPool for two namespaces, <namespace1> and <namespace2> : USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io- 10.22. Virtual machine disks 10.22.1. Storage features Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization. 10.22.1.1. OpenShift Virtualization storage feature matrix Table 10.5. 
OpenShift Virtualization storage feature matrix Virtual machine live migration Host-assisted virtual machine disk cloning Storage-assisted virtual machine disk cloning Virtual machine snapshots OpenShift Data Foundation: RBD block-mode volumes Yes Yes Yes Yes OpenShift Virtualization hostpath provisioner No Yes No No Other multi-node writable storage Yes [1] Yes Yes [2] Yes [2] Other single-node writable storage No Yes Yes [2] Yes [2] PVCs must request a ReadWriteMany access mode. Storage provider must support both Kubernetes and CSI snapshot APIs Note You cannot live migrate virtual machines that use: A storage class with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Do not set the evictionStrategy field to LiveMigrate for these virtual machines. 10.22.2. Configuring local storage for virtual machines You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use the HPP, you must create an HPP custom resource (CR). 10.22.2.1. Creating a hostpath provisioner with a basic storage pool You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver. Prerequisites The directories specified in spec.storagePools.path must have read/write access. The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Procedure Create an hpp_cr.yaml file with a storagePools stanza as in the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: "/var/myvolumes" 2 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array to which you can add multiple entries. 2 Specify the storage pool directories under this node path. Save the file and exit. Create the HPP by running the following command: USD oc create -f hpp_cr.yaml 10.22.2.1.1. About creating storage classes When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the storagePools stanza. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. 10.22.2.1.2. 
Creating a storage class for the CSI driver with the storagePools stanza You create a storage class custom resource (CR) for the hostpath provisioner (HPP) CSI driver. Procedure Create a storageclass_csi.yaml file to define the storage class: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3 1 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the default value is Delete . 2 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. 3 Specify the name of the storage pool defined in the HPP CR. Save the file and exit. Create the StorageClass object by running the following command: USD oc create -f storageclass_csi.yaml 10.22.2.2. About storage pools created with PVC templates If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR). A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation. The PVC template is based on the spec stanza of the PersistentVolumeClaim object: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 1 This value is only required for block volume mode PVs. You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes. You can combine basic storage pools with storage pools created from PVC templates. 10.22.2.2.1. Creating a storage pool with a PVC template You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR). Prerequisites The directories specified in spec.storagePools.path must have read/write access. The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Procedure Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume (PVC) template in the storagePools stanza according to the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: "/var/myvolumes" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array that can contain both basic and PVC template storage pools. 2 Specify the storage pool directories under this node path. 
3 Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem . If the volumeMode is Block , the mounting pod creates an XFS file system on the block volume before mounting it. 4 If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName , ensure that the HPP storage class is not the default storage class. 5 You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request. Save the file and exit. Create the HPP with a storage pool by running the following command: USD oc create -f hpp_pvc_template_pool.yaml Additional resources Customizing the storage profile 10.22.3. Creating data volumes You can create a data volume by using either the PVC or storage API. Important When using OpenShift Virtualization with OpenShift Container Platform Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block . Tip Whenever possible, use the storage API to optimize space allocation and maximize performance. A storage profile is a custom resource that the CDI manages. It provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class. Storage profiles enable you to create data volumes quickly while reducing coding and minimizing potential errors. For recognized storage types, the CDI provides values that optimize the creation of PVCs. However, you can configure automatic settings for a storage class if you customize the storage profile. 10.22.3.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.22.3.2. Creating data volumes using the storage API When you create a data volume using the storage API, the Containerized Data Interface (CDI) optimizes your persistent volume claim (PVC) allocation based on the type of storage supported by your selected storage class. You only have to specify the data volume name, namespace, and the amount of storage that you want to allocate. For example: When using Ceph RBD, accessModes is automatically set to ReadWriteMany , which enables live migration. volumeMode is set to Block to maximize performance. When you are using volumeMode: Filesystem , more space will automatically be requested by the CDI, if required to accommodate file system overhead. 
In the following YAML, using the storage API requests a data volume with two gigabytes of usable space. The user does not need to know the volumeMode in order to correctly estimate the required persistent volume claim (PVC) size. The CDI chooses the optimal combination of accessModes and volumeMode attributes automatically. These optimal values are based on the type of storage or the defaults that you define in your storage profile. If you want to provide custom values, they override the system-calculated values. Example DataVolume definition apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7 1 The name of the new data volume. 2 Indicate that the source of the import is an existing persistent volume claim (PVC). 3 The namespace where the source PVC exists. 4 The name of the source PVC. 5 Indicates allocation using the storage API. 6 Specifies the amount of available space that you request for the PVC. 7 Optional: The name of the storage class. If the storage class is not specified, the system default storage class is used. 10.22.3.3. Creating data volumes using the PVC API When you create a data volume using the PVC API, the Containerized Data Interface (CDI) creates the data volume based on what you specify for the following fields: accessModes ( ReadWriteOnce , ReadWriteMany , or ReadOnlyMany ) volumeMode ( Filesystem or Block ) capacity of storage ( 5Gi , for example) In the following YAML, using the PVC API allocates a data volume with a storage capacity of two gigabytes. You specify an access mode of ReadWriteMany to enable live migration. Because you know the values your system can support, you specify Block storage instead of the default, Filesystem . Example DataVolume definition apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9 1 The name of the new data volume. 2 In the source section, pvc indicates that the source of the import is an existing persistent volume claim (PVC). 3 The namespace where the source PVC exists. 4 The name of the source PVC. 5 Indicates allocation using the PVC API. 6 accessModes is required when using the PVC API. 7 Specifies the amount of space you are requesting for your data volume. 8 Specifies that the destination is a block PVC. 9 Optionally, specify the storage class. If the storage class is not specified, the system default storage class is used. Important When you explicitly allocate a data volume by using the PVC API and you are not using volumeMode: Block , consider file system overhead. File system overhead is the amount of space required by the file system to maintain its metadata. The amount of space required for file system metadata is file system dependent. Failing to account for file system overhead in your storage capacity request can result in an underlying persistent volume claim (PVC) that is not large enough to accommodate your virtual machine disk. If you use the storage API, the CDI will factor in file system overhead and request a larger persistent volume claim (PVC) to ensure that your allocation request is successful. 10.22.3.4. 
Customizing the storage profile You can specify default parameters by editing the StorageProfile object for the provisioner's storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object. Prerequisites Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail. Note An empty status section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Interface (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by the CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations. Warning If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created. Procedure Edit the storage profile. In this example, the provisioner is not recognized by CDI: USD oc edit -n openshift-cnv storageprofile <storage_class> Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> Provide the needed attribute values in the storage profile: Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. After you save your changes, the selected values appear in the storage profile status element. 10.22.3.4.1. Setting a default cloning strategy using a storage profile You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy . Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values: snapshot - This method is used by default when snapshots are configured. This cloning strategy uses a temporary volume snapshot to clone the volume. The storage provisioner must support CSI snapshots. copy - This method uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning. csi-clone - This method uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy , which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class. Note You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec section. Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> # ... 
spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. 3 The default cloning method of your choice. In this example, CSI volume cloning is specified. 10.22.3.5. Additional resources About creating storage classes Overriding the default file system overhead value Cloning a data volume using smart cloning 10.22.4. Reserving PVC space for file system overhead By default, the OpenShift Virtualization reserves space for file system overhead data in persistent volume claims (PVCs) that use the Filesystem volume mode. You can set the percentage to reserve space for this purpose globally and for specific storage classes. 10.22.4.1. How file system overhead affects space for virtual machine disks When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for: The virtual machine disk. The space reserved for file system overhead, such as metadata By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount. You can configure a different overhead value by editing the HCO object. You can change the value globally and you can specify values for specific storage classes. 10.22.4.2. Overriding the default file system overhead value Change the amount of persistent volume claim (PVC) space that the OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Open the HCO object for editing by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the spec.filesystemOverhead fields, populating them with your chosen values: ... spec: filesystemOverhead: global: "<new_global_value>" 1 storageClass: <storage_class_name>: "<new_value_for_this_storage_class>" 2 1 The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead. 2 The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%. Save and exit the editor to update the HCO object. Verification View the CDIConfig status and verify your changes by running one of the following commands: To generally verify changes to CDIConfig : USD oc get cdiconfig -o yaml To view your specific changes to CDIConfig : USD oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}' 10.22.5. Configuring CDI to work with namespaces that have a compute resource quota You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions. 10.22.5.1. About CPU and memory quotas in a namespace A resource quota , defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0 . 
This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. 10.22.5.2. Overriding CPU and memory defaults Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: "500m" memory: "2Gi" requests: cpu: "250m" memory: "1Gi" Save and exit the editor to update the HyperConverged CR. 10.22.5.3. Additional resources Resource quotas per project 10.22.6. Managing data volume annotations Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 10.22.6.1. Example: Data volume annotations This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation. Multus network annotation example apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: "example.exampleurl.com" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi 1 Multus network annotation 10.22.7. Using preallocation for data volumes The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. 10.22.7.1. About preallocation The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes. If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type: fallocate If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized. full If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed. 10.22.7.2. Enabling preallocation for a data volume You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI ( oc ). Preallocation mode is supported for all CDI source types. 
Procedure Specify the spec.preallocation field in the data volume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 ... pvc: ... preallocation: true 2 1 All CDI source types support preallocation, however preallocation is ignored for cloning operations. 2 The preallocation field is a boolean that defaults to false. 10.22.8. Uploading local disk images by using the web console You can upload a locally stored disk image file by using the web console. 10.22.8.1. Prerequisites You must have a virtual machine image file in IMG, ISO, or QCOW2 format. If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 10.22.8.2. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 10.22.8.3. Uploading an image file using the web console Use the web console to upload an image file to a new persistent volume claim (PVC). You can later use this PVC to attach the image to new virtual machines. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . Procedure From the side menu of the web console, click Storage Persistent Volume Claims . Click the Create Persistent Volume Claim drop-down list to expand it. Click With Data Upload Form to open the Upload Data to Persistent Volume Claim page. Click Browse to open the file manager and select the image that you want to upload, or drag the file into the Drag a file here or browse to upload field. Optional: Set this image as the default image for a specific operating system. Select the Attach this data to a virtual machine operating system check box. Select an operating system from the list. The Persistent Volume Claim Name field is automatically filled with a unique name and cannot be edited. Take note of the name assigned to the PVC so that you can identify it later, if necessary. Select a storage class from the Storage Class list. In the Size field, enter the size value for the PVC. Select the corresponding unit of measurement from the drop-down list. Warning The PVC size must be larger than the size of the uncompressed virtual disk. Select an Access Mode that matches the storage class that you selected. Click Upload . 10.22.8.4. 
Additional resources Configure preallocation mode to improve write performance for data volume operations. 10.22.9. Uploading local disk images by using the virtctl tool You can upload a locally stored disk image to a new or existing data volume by using the virtctl command-line utility. 10.22.9.1. Prerequisites Install virtctl . If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 10.22.9.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.22.9.3. Creating an upload data volume You can manually create a data volume with an upload data source to use for uploading local disk images. Procedure Create a data volume configuration that specifies spec: source: upload{} : apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2 1 The name of the data volume. 2 The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload. Create the data volume by running the following command: USD oc create -f <upload-datavolume>.yaml 10.22.9.4. Uploading a local disk image to a data volume You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure. Note After you upload a local disk image, you can add it to a virtual machine. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . The kubevirt-virtctl package must be installed on the client machine. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Identify the following items: The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically. The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image. The file location of the virtual machine disk image that you want to upload. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the step. 
For example: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the virtual machine disk image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 10.22.9.5. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 10.22.9.6. Additional resources Configure preallocation mode to improve write performance for data volume operations. 10.22.10. Uploading a local disk image to a block storage data volume You can upload a local disk image into a block data volume by using the virtctl command-line utility. In this workflow, you create a local block device to use as a persistent volume, associate this block volume with an upload data volume, and use virtctl to upload the local disk image into the data volume. 10.22.10.1. Prerequisites Install virtctl . If you require scratch space according to the CDI supported operations matrix , you must first define a storage class or prepare CDI scratch space for this operation to complete successfully. 10.22.10.2. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.22.10.3. About block persistent volumes A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification. 10.22.10.4. Creating a local block persistent volume Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image. 
Procedure Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB): USD dd if=/dev/zero of=<loop10> bs=100M count=20 Mount the loop10 file as a loop device. USD losetup </dev/loop10> <loop10> 1 2 1 File path where the loop device is mounted. 2 The file created in the previous step to be mounted as the loop device. Create a PersistentVolume manifest that references the mounted loop device. kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4 1 The path of the loop device on the node. 2 Specifies it is a block PV. 3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used. 4 The node on which the block device was mounted. Create the block PV. # oc create -f <local-block-pv10.yaml> 1 1 The file name of the persistent volume created in the previous step. 10.22.10.5. Creating an upload data volume You can manually create a data volume with an upload data source to use for uploading local disk images. Procedure Create a data volume configuration that specifies spec: source: upload{} : apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2 1 The name of the data volume. 2 The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload. Create the data volume by running the following command: USD oc create -f <upload-datavolume>.yaml 10.22.10.6. Uploading a local disk image to a data volume You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure. Note After you upload a local disk image, you can add it to a virtual machine. Prerequisites You must have one of the following: A raw virtual machine image file in either ISO or IMG format. A virtual machine image file in QCOW2 format. For best results, compress your image file according to the following guidelines before you upload it: Compress a raw image file by using xz or gzip . Note Using a compressed raw image file results in the most efficient upload. Compress a QCOW2 image file by using the method that is recommended for your client: If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool. If you use a Windows client, compress the QCOW2 file by using xz or gzip . The kubevirt-virtctl package must be installed on the client machine. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Identify the following items: The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically. The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
The file location of the virtual machine disk image that you want to upload. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the step. For example: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the virtual machine disk image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 10.22.10.7. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 10.22.10.8. Additional resources Configure preallocation mode to improve write performance for data volume operations. 10.22.11. Managing virtual machine snapshots You can create and delete virtual machine (VM) snapshots for VMs, whether the VMs are powered off (offline) or on (online). You can only restore to a powered off (offline) VM. OpenShift Virtualization supports VM snapshots on the following: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Online snapshots have a default time deadline of five minutes ( 5m ) that can be changed, if needed. Important Online snapshots are supported for virtual machines that have hot-plugged virtual disks. However, hot-plugged disks that are not in the virtual machine specification are not included in the snapshot. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 10.22.11.1. About virtual machine snapshots A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a development version. A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state). 
When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. With the VM snapshots feature, cluster administrators and application developers can: Create a new snapshot List all snapshots attached to a specific VM Restore a VM from a snapshot Delete an existing VM snapshot 10.22.11.1.1. Virtual machine snapshot controller and custom resource definitions (CRDs) The VM snapshot feature introduces three new API objects defined as CRDs for managing snapshots: VirtualMachineSnapshot : Represents a user request to create a snapshot. It contains information about the current state of the VM. VirtualMachineSnapshotContent : Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. VirtualMachineRestore : Represents a user request to restore a VM from a snapshot. The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping. 10.22.11.2. Installing QEMU guest agent on a Linux virtual machine The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. Install the agent and start the service. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Access the virtual machine command line through one of the consoles or by SSH. Install the QEMU guest agent on the virtual machine: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent 10.22.11.3. Installing QEMU guest agent on a Windows virtual machine For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation. To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 
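A quick way to perform the AgentConnected check that is mentioned in the preceding sections is to query the conditions of the running VirtualMachineInstance from the CLI. The following command is a sketch rather than a documented step: <vm_name> is a placeholder, the VM must be running so that a VirtualMachineInstance exists, and the jsonpath filter prints the status of the AgentConnected condition if it is present.
USD oc get vmi <vm_name> -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'
If the command prints True , the guest agent is connected and a quiesced snapshot can be taken. An empty result or False indicates that quiescing is not possible and only a best-effort snapshot is taken.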
10.22.11.3.1. Installing VirtIO drivers on an existing Windows virtual machine Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine. Note This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps. Procedure Start the virtual machine and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the virtual machine to complete the driver installation. 10.22.11.3.2. Installing VirtIO drivers during Windows installation Install the VirtIO drivers from the attached SATA CD driver during Windows installation. Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Procedure Start the virtual machine and connect to a graphical console. Begin the Windows installation process. Select the Advanced installation. The storage destination will not be recognized until the driver is loaded. Click Load driver . The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Repeat the two steps for all required drivers. Complete the Windows installation. 10.22.11.4. Creating a virtual machine snapshot in the web console You can create a virtual machine (VM) snapshot by using the web console. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. The VM snapshot only includes disks that meet the following requirements: Must be either a data volume or persistent volume claim Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. If the virtual machine is running, click Actions Stop to power it down. Click the Snapshots tab and then click Take Snapshot . Fill in the Snapshot Name and optional Description fields. 
Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot. If your VM has disks that cannot be included in the snapshot and you still wish to proceed, select the I am aware of this warning and wish to proceed checkbox. Click Save . 10.22.11.5. Creating a virtual machine snapshot in the CLI You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. Kubevirt will coordinate with the QEMU guest agent to create a snapshot of the online VM. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM's file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Prerequisites Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots. Install the OpenShift CLI ( oc ). Optional: Power down the VM for which you want to create a snapshot. Procedure Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM. For example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 1 The name of the new VirtualMachineSnapshot object. 2 The name of the source VM. Create the VirtualMachineSnapshot resource. The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot and updates the status and readyToUse fields of the VirtualMachineSnapshot object. USD oc create -f <my-vmsnapshot>.yaml Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot: Enter the following command: USD oc wait my-vm my-vmsnapshot --for condition=Ready Verify the status of the snapshot: InProgress - The online snapshot operation is still in progress. Succeeded - The online snapshot operation completed successfully. Failed - The online snapshot operaton failed. Note Online snapshots have a default time deadline of five minutes ( 5m ). If the snapshot does not complete successfully in five minutes, the status is set to failed . Afterwards, the file system will be thawed and the VM unfrozen but the status remains failed until you delete the failed snapshot image. To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time designated in minutes ( m ) or in seconds ( s ) that you want to specify before the snapshot operation times out. To set no deadline, you can specify 0 , though this is generally not recommended, as it can result in an unresponsive VM. If you do not specify a unit of time such as m or s , the default is seconds ( s ). Verification Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent . The readyToUse flag must be set to true . 
USD oc describe vmsnapshot <my-vmsnapshot> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 1 The status field of the Progressing condition specifies if the snapshot is still being created. 2 The status field of the Ready condition specifies if the snapshot creation process is complete. 3 Specifies if the snapshot is ready to be used. 4 Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot. 10.22.11.6. Verifying online snapshot creation with snapshot indications Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. Prerequisites To view indications, you must have attempted to create an online VM snapshot using the CLI or the web console. Procedure Display the output from the snapshot indications by doing one of the following: For snapshots created with the CLI, view indicator output in the VirtualMachineSnapshot object YAML, in the status field. For snapshots created using the web console, click VirtualMachineSnapshot > Status in the Snapshot details screen. Verify the status of your online VM snapshot: Online indicates that the VM was running during online snapshot creation. NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error. 10.22.11.7. Restoring a virtual machine from a snapshot in the web console You can restore a virtual machine (VM) to a configuration represented by a snapshot in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. If the virtual machine is running, click Actions Stop to power it down. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine. Choose one of the following methods to restore a VM snapshot: For the snapshot that you want to use as the source to restore the VM, click Restore . Select a snapshot to open the Snapshot Details screen and click Actions Restore VirtualMachineSnapshot . In the confirmation pop-up window, click Restore to restore the VM to its configuration represented by the snapshot. 10.22.11.8. 
Restoring a virtual machine from a snapshot in the CLI You can restore an existing virtual machine (VM) to a configuration by using a VM snapshot. You can only restore from an offline VM snapshot. Prerequisites Install the OpenShift CLI ( oc ). Power down the VM you want to restore to a state. Procedure Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source. For example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3 1 The name of the new VirtualMachineRestore object. 2 The name of the target VM you want to restore. 3 The name of the VirtualMachineSnapshot object to be used as the source. Create the VirtualMachineRestore resource. The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. USD oc create -f <my-vmrestore>.yaml Verification Verify that the VM is restored to the state represented by the snapshot. The complete flag must be set to true . USD oc get vmrestore <my-vmrestore> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1 1 Specifies if the process of restoring the VM to the state represented by the snapshot is complete. 2 The status field of the Progressing condition specifies if the VM is still being restored. 3 The status field of the Ready condition specifies if the VM restoration process is complete. 10.22.11.9. Deleting a virtual machine snapshot in the web console You can delete an existing virtual machine snapshot by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine. Click the Options menu of the virtual machine snapshot that you want to delete and select Delete VirtualMachineSnapshot . In the confirmation pop-up window, click Delete to delete the snapshot. 10.22.11.10. 
Deleting a virtual machine snapshot in the CLI You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Delete the VirtualMachineSnapshot object. The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object. USD oc delete vmsnapshot <my-vmsnapshot> Verification Verify that the snapshot is deleted and no longer attached to this VM: USD oc get vmsnapshot 10.22.11.11. Additional resources CSI Volume Snapshots 10.22.12. Moving a local virtual machine disk to a different node Virtual machines that use local volume storage can be moved so that they run on a specific node. You might want to move the virtual machine to a specific node for the following reasons: The current node has limitations to the local storage configuration. The new node is better optimized for the workload of that virtual machine. To move a virtual machine that uses local storage, you must clone the underlying volume by using a data volume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new data volume, or add the new data volume to another virtual machine . Tip When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes . Note Users without the cluster-admin role require additional user permissions to clone volumes across namespaces. 10.22.12.1. Cloning a local volume to another node You can move a virtual machine disk so that it runs on a specific node by cloning the underlying persistent volume claim (PVC). To ensure the virtual machine disk is cloned to the correct node, you must either create a new persistent volume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the data volume. Note The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails. Prerequisites The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk. Procedure Either create a new local PV on the node, or identify a local PV already on the node: Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01 . kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem 1 The name of the PV. 2 The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC. 3 The mount path on the node. 4 The name of the node where you want to create the PV. Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration: USD oc get pv <destination-pv> -o yaml The following snippet shows that the PV is on node01 : Example output ... 
spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2 ... 1 The kubernetes.io/hostname key uses the node hostname to select a node. 2 The hostname of the node. Add a unique label to the PV: USD oc label pv <destination-pv> node=node01 Create a data volume manifest that references the following: The PVC name and namespace of the virtual machine. The label you applied to the PV in the step. The size of the destination PV. apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: "<source-vm-disk>" 2 namespace: "<source-namespace>" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5 1 The name of the new data volume. 2 The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName . 3 The namespace where the source PVC exists. 4 The label that you applied to the PV in the step. 5 The size of the destination PV. Start the cloning operation by applying the data volume manifest to your cluster: USD oc apply -f <clone-datavolume.yaml> The data volume clones the PVC of the virtual machine into the PV on the specific node. 10.22.13. Expanding virtual storage by adding blank disk images You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization. 10.22.13.1. About data volumes DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification. Note VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM. 10.22.13.2. Creating a blank disk image with data volumes You can create a new blank disk image in a persistent volume claim by customizing and deploying a data volume configuration file. Prerequisites At least one available persistent volume. Install the OpenShift CLI ( oc ). Procedure Edit the DataVolume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: "hostpath" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi Create the blank disk image by running the following command: USD oc create -f <blank-image-datavolume>.yaml 10.22.13.3. Additional resources Configure preallocation mode to improve write performance for data volume operations. 10.22.14. Cloning a data volume using smart-cloning Smart-cloning is a built-in feature of Red Hat OpenShift Data Foundation. Smart-cloning is faster and more efficient than host-assisted cloning. You do not need to perform any action to enable smart-cloning, but you need to ensure your storage environment is compatible with smart-cloning to use this feature. When you create a data volume with a persistent volume claim (PVC) source, you automatically initiate the cloning process. 
You always receive a clone of the data volume, whether or not your environment supports smart-cloning. However, you only receive the performance benefits of smart-cloning if your storage provider supports it.
10.22.14.1. About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.
Note
VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.
10.22.14.2. About smart-cloning
When a data volume is smart-cloned, the following occurs:
A snapshot of the source persistent volume claim (PVC) is created.
A PVC is created from the snapshot.
The snapshot is deleted.
10.22.14.3. Cloning a data volume
Prerequisites
For smart-cloning to occur, the following conditions are required:
Your storage provider must support snapshots.
The source and target PVCs must be defined in the same storage class.
The source and target PVCs must share the same volumeMode .
The VolumeSnapshotClass object must reference the storage class defined for both the source and target PVCs.
Procedure
To initiate cloning of a data volume:
Create a YAML file for a DataVolume object that specifies the name of the new data volume and the name and namespace of the source PVC. In this example, because you specify the storage API, there is no need to specify accessModes or volumeMode . The optimal values are calculated for you automatically.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  storage: 4
    resources:
      requests:
        storage: <2Gi> 5

1 The name of the new data volume.
2 The namespace where the source PVC exists.
3 The name of the source PVC.
4 Specifies allocation with the storage API.
5 The size of the new data volume.
Start cloning the PVC by creating the data volume:

$ oc create -f <cloner-datavolume>.yaml

Note
Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones.
10.22.14.4. Additional resources
Cloning the persistent volume claim of a virtual machine disk into a new data volume
Configure preallocation mode to improve write performance for data volume operations.
Customizing the storage profile
10.22.15. Creating and using boot sources
A boot source contains a bootable operating system (OS) and all of the configuration settings for the OS, such as drivers. You use a boot source to create virtual machine templates with specific configurations. These templates can be used to create any number of available virtual machines.
Quick Start tours are available in the OpenShift Container Platform web console to assist you in creating a custom boot source, uploading a boot source, and other tasks. Select Quick Starts from the Help menu to view the Quick Start tours.
10.22.15.1. About virtual machines and boot sources
Virtual machines consist of a virtual machine definition and one or more disks that are backed by data volumes.
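To make the relationship between a virtual machine definition and its data volume backed disks concrete, the following is a minimal sketch rather than a complete, supported manifest; the VM name, the rhel8 DataSource reference, and the sizes are assumptions for illustration:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                       # assumed name
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-vm-rootdisk          # data volume that backs the VM disk
    spec:
      sourceRef:
        kind: DataSource
        name: rhel8                      # assumed boot source DataSource
        namespace: openshift-virtualization-os-images
      storage:
        resources:
          requests:
            storage: 30Gi                # assumed size
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
        resources:
          requests:
            memory: 2Gi                  # assumed memory request
      volumes:
      - dataVolume:
          name: example-vm-rootdisk      # links the disk to the data volume
        name: rootdisk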
Virtual machine templates enable you to create virtual machines using predefined virtual machine specifications. Every virtual machine template requires a boot source, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. To use the boot sources feature, install the latest release of OpenShift Virtualization. The namespace openshift-virtualization-os-images enables the feature and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. Define a boot source by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you create any number of fully configured ready-to-use virtual machines from the template. 10.22.15.2. Importing a RHEL image as a boot source You can import a Red Hat Enterprise Linux (RHEL) image as a boot source by specifying a URL for the image. Prerequisites You must have access to a web page with the operating system image. For example: Download Red Hat Enterprise Linux web page with images. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Identify the RHEL template for which you want to configure a boot source and click Add source . In the Add boot source to template window, select URL (creates PVC) from the Boot source type list. Click RHEL download page to access the Red Hat Customer Portal. A list of available installers and images is displayed on the Download Red Hat Enterprise Linux page. Identify the Red Hat Enterprise Linux KVM guest image that you want to download. Right-click Download Now , and copy the URL for the image. In the Add boot source to template window, paste the URL into the Import URL field, and click Save and import . Verification Verify that the template displays a green checkmark in the Boot source column on the Templates page. You can now use this template to create RHEL virtual machines. 10.22.15.3. Adding a boot source for a virtual machine template A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Source available on the Templates page. After you add a boot source to a template, you can create a new virtual machine from the template. 
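If you prefer the CLI over the web console methods listed below, a boot source image can also be imported from a URL by creating a data volume with an HTTP source and then attaching the resulting PVC to a template with the Clone (creates PVC) option. This is a hedged sketch; the data volume name, namespace, URL, and size are placeholders:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-boot-source                # placeholder name
  namespace: example-namespace           # placeholder namespace
spec:
  source:
    http:
      url: "https://<web_server>/rhel-9-kvm-guest-image.qcow2"   # placeholder URL
  storage:
    resources:
      requests:
        storage: 30Gi                    # placeholder size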
There are four methods for selecting and adding a boot source in the web console: Upload local file (creates PVC) URL (creates PVC) Clone (creates PVC) Registry (creates PVC) Prerequisites To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added. To upload a local file, the operating system image file must exist on your local machine. To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images. To clone an existing PVC, access to the project with a PVC is required. To import via registry, access to the container registry is required. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Click the options menu beside a template and select Edit boot source . Click Add disk . In the Add disk window, select Use this disk as a boot source . Enter the disk name and select a Source , for example, Blank (creates PVC) or Use an existing PVC . Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required. Select a Type , for example, Disk or CD-ROM . Optional: Click Storage class and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs. Note Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. Optional: Clear Apply optimized StorageProfile settings to edit the access mode or volume mode. Select the appropriate method to save your boot source: Click Save and upload if you uploaded a local file. Click Save and import if you imported content from a URL or the registry. Click Save and clone if you cloned an existing PVC. Your custom virtual machine template with a boot source is listed on the Catalog page. You can use this template to create a virtual machine. 10.22.15.4. Creating a virtual machine from a template with an attached boot source After you add a boot source to a template, you can create a virtual machine from the template. Procedure In the OpenShift Container Platform web console, click Virtualization Catalog in the side menu. Select the updated template and click Quick create VirtualMachine . The VirtualMachine details is displayed with the status Starting . 10.22.15.5. Additional resources Creating virtual machine templates Automatic importing and updating of pre-defined boot sources 10.22.16. Hot plugging virtual disks You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI). 10.22.16.1. About hot plugging virtual disks When you hot plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running. When you hot unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot unplugged. 
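For example, a blank data volume such as the following hedged sketch can be created ahead of time and later attached with the virtctl addvolume command described in the next sections; the name and size are assumptions:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: hotplug-blank-dv        # assumed name
spec:
  source:
    blank: {}
  storage:
    resources:
      requests:
        storage: 5Gi            # assumed size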
You cannot hot plug or hot unplug container disks.
After you hot plug a virtual disk, it remains attached until you detach it, even if you restart the virtual machine.
10.22.16.2. About virtio-scsi
In OpenShift Virtualization, each virtual machine (VM) has a virtio-scsi controller so that hot plugged disks can use a scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks.
Regular virtio is not available for hot plugged disks because it is not scalable: each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance, therefore slots might not be available on demand.
10.22.16.3. Hot plugging a virtual disk using the CLI
Hot plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running.
Prerequisites
You must have a running virtual machine to hot plug a virtual disk.
You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
Hot plug a virtual disk by running the following command:

$ virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> \
  [--persist] [--serial=<label-name>]

Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
10.22.16.4. Hot unplugging a virtual disk using the CLI
Hot unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running.
Prerequisites
Your virtual machine must be running.
You must have at least one data volume or persistent volume claim (PVC) available and hot plugged.
Procedure
Hot unplug a virtual disk by running the following command:

$ virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>

10.22.16.5. Hot plugging a virtual disk using the web console
Hot plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running. When you hot plug a virtual disk, it remains attached to the VMI until you unplug it.
Prerequisites
You must have a running virtual machine to hot plug a virtual disk.
Procedure
Click Virtualization VirtualMachines from the side menu.
Select the running virtual machine to which you want to hot plug a virtual disk.
On the VirtualMachine details page, click the Disks tab.
Click Add disk .
In the Add disk (hot plugged) window, fill in the information for the virtual disk that you want to hot plug.
Click Save .
10.22.16.6. Hot unplugging a virtual disk using the web console
Hot unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running.
Prerequisites
Your virtual machine must be running with a hot plugged disk attached.
Procedure
Click Virtualization VirtualMachines from the side menu.
Select the running virtual machine with the disk you want to hot unplug to open the VirtualMachine details page.
On the Disks tab, click the Options menu of the virtual disk that you want to hot unplug.
Click Detach .
10.22.17. Using container disks with virtual machines
You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.
Important
If you use large container disks, I/O traffic might increase, impacting worker nodes. This can lead to unavailable nodes. You can resolve this by:
Pruning DeploymentConfig objects
Configuring garbage collection
10.22.17.1. About container disks
A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.
A container disk can either be imported into a persistent volume claim (PVC) by using a data volume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.
10.22.17.1.1. Importing a container disk into a PVC by using a data volume
Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a data volume. You can then attach the data volume to a virtual machine for persistent storage.
10.22.17.1.2. Attaching a container disk to a virtual machine as a containerDisk volume
A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.
Use containerDisk volumes for read-only file systems such as CD-ROMs or for disposable virtual machines.
Important
Using containerDisk volumes for read-write file systems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.
10.22.17.2. Preparing a container disk for virtual machines
You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC using a data volume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.
The size of a disk image inside a container disk is limited by the maximum layer size of the registry where the container disk is hosted.
Note
For Red Hat Quay , you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.
Prerequisites
Install podman if it is not already installed.
The virtual machine image must be either QCOW2 or RAW format.
Procedure
Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107 , and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440 .
The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:

$ cat > Dockerfile << EOF
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
RUN chmod 0440 /disk/*

FROM scratch
COPY --from=builder /disk/* /disk/
EOF

1 Where <vm_image> is the virtual machine image in either QCOW2 or RAW format. To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image.
Build and tag the container:

$ podman build -t <registry>/<container_disk_name>:latest .

Push the container image to the registry:

$ podman push <registry>/<container_disk_name>:latest

If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.
10.22.17.3. Disabling TLS for a container registry to use as insecure registry
You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource.
Prerequisites
Log in to the cluster as a user with the cluster-admin role.
Procedure
Edit the HyperConverged custom resource and add a list of insecure registries to the spec.storageImport.insecureRegistries field.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  storageImport:
    insecureRegistries: 1
      - "private-registry-example-1:5000"
      - "private-registry-example-2:5000"

1 Replace the examples in this list with valid registry hostnames.
10.22.17.4. Next steps
Import the container disk into persistent storage for a virtual machine .
Create a virtual machine that uses a containerDisk volume for ephemeral storage.
10.22.18. Preparing CDI scratch space
10.22.18.1. About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.
Note
VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.
10.22.18.2. About scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource.
If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.
Note
CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume.
If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs.
Manual provisioning
If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
10.22.18.3. CDI operations that require scratch space
Registry imports: CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk.
Upload image: QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion.
HTTP imports of archived images: QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG.
HTTP imports of authenticated images: QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG.
HTTP imports of custom certificates: QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG.
10.22.18.4. Defining a storage class
You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR).
Prerequisites
Install the OpenShift CLI ( oc ).
Procedure
Edit the HyperConverged CR by running the following command:

$ oc edit hco -n openshift-cnv kubevirt-hyperconverged

Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  scratchSpaceStorageClass: "<storage_class>" 1

1 If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.
Save and exit your default editor to update the HyperConverged CR.
10.22.18.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
KubeVirt (QCOW2):
HTTP: [✓] QCOW2, [✓] GZ*, [✓] XZ*
HTTPS: [✓] QCOW2**, [✓] GZ*, [✓] XZ*
HTTP basic auth: [✓] QCOW2, [✓] GZ*, [✓] XZ*
Registry: [✓] QCOW2*, □ GZ, □ XZ
Upload: [✓] QCOW2*, [✓] GZ*, [✓] XZ*
KubeVirt (RAW):
HTTP: [✓] RAW, [✓] GZ, [✓] XZ
HTTPS: [✓] RAW, [✓] GZ, [✓] XZ
HTTP basic auth: [✓] RAW, [✓] GZ, [✓] XZ
Registry: [✓] RAW*, □ GZ, □ XZ
Upload: [✓] RAW*, [✓] GZ*, [✓] XZ*
[✓] Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
10.22.18.6. Additional resources
Dynamic provisioning
10.22.19. Re-using persistent volumes
To re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.
10.22.19.1. About reclaiming statically provisioned persistent volumes
When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.
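Before you start the reclaim procedure that follows, it can help to confirm which claim a PV is bound to and whether its reclaim policy is already Retain. This optional check is a sketch rather than part of the documented procedure; <pv_name> is a placeholder:

$ oc get pv <pv_name> -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,CLAIM:.spec.claimRef.name,RECLAIM_POLICY:.spec.persistentVolumeReclaimPolicy
# The default 'oc get pv' output also shows STATUS, CLAIM, and RECLAIM POLICY columns.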
You can then re-use the PV configuration to create a PV with a different name.
Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV.
Important
The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.
10.22.19.2. Reclaiming statically provisioned persistent volumes
Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.
Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.
Procedure
Ensure that the reclaim policy of the PV is set to Retain :
Check the reclaim policy of the PV:

$ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'

If the persistentVolumeReclaimPolicy is not set to Retain , edit the reclaim policy with the following command:

$ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Ensure that no resources are using the PV:

$ oc describe pvc <pvc_name> | grep 'Mounted By:'

Remove any resources that use the PVC before continuing.
Delete the PVC to release the PV:

$ oc delete pvc <pvc_name>

Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:

$ oc get pv <pv_name> -o yaml > <file_name>.yaml

Delete the PV:

$ oc delete pv <pv_name>

Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:

$ rm -rf <path_to_share_storage>

Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:
Note
To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.

$ oc create -f <new_pv_name>.yaml

Additional resources
Configuring local storage for virtual machines
The OpenShift Container Platform Storage documentation has more information on Persistent Storage .
10.22.20. Expanding a virtual machine disk
You can enlarge the size of a virtual machine's (VM) disk to provide a greater storage capacity by resizing the disk's persistent volume claim (PVC).
However, you cannot reduce the size of a VM disk.
10.22.20.1. Enlarging a virtual machine disk
VM disk enlargement makes extra space available to the virtual machine. However, it is the responsibility of the VM owner to decide how to consume the storage.
If the disk is a Filesystem PVC, the matching file expands to the remaining size while reserving some space for file system overhead.
Procedure
Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand:

$ oc edit pvc <pvc_name>

Change the value of the spec.resources.requests.storage attribute to a larger size.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-expand
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi 1

1 The VM disk size that can be increased
10.22.20.2. Additional resources
Extending a basic volume in Windows .
Extending an existing file system partition without destroying data in Red Hat Enterprise Linux .
Extending a logical volume and its file system online in Red Hat Enterprise Linux .
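As an alternative to editing the PVC interactively in the enlarging procedure above, the same resize can be requested with a patch. This is a hedged sketch; the PVC name and target size are placeholders, and the underlying storage class must allow volume expansion ( allowVolumeExpansion: true ) for the change to take effect:

$ oc patch pvc <pvc_name> --type merge -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'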
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 2 chpasswd: { expire: False } name: cloudinitdisk",
"oc create -f <vm_manifest_file>.yaml",
"virtctl start <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template:",
"oc edit <object_type> <object_ID>",
"oc apply <object_type> <object_ID>",
"oc edit vm example",
"disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default",
"oc delete vm <vm_name>",
"apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3",
"oc create -f example-export.yaml",
"oc get vmexport example-export -o yaml",
"apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export",
"oc get vmis -A",
"oc delete vmi <vmi_name>",
"ssh-keygen -f <key_file> 1",
"oc create secret generic my-pub-key --from-file=key1=<key_file>.pub",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: testvm spec: running: true template: spec: accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key 1 propagationMethod: configDrive: {} 2",
"virtctl ssh -i <key_file> <vm_username>@<vm_name>",
"virtctl scp -i <key_file> <filename> <vm_username>@<vm_name>:",
"virtctl scp -i <key_file> <vm_username@<vm_name>:<filename> .",
"Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p",
"ssh <user>@vm/<vm_name>.<namespace>",
"virtctl console <VMI>",
"virtctl vnc <VMI>",
"virtctl vnc <VMI> -v 4",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: rdpservice 1 namespace: example-namespace 2 spec: ports: - targetPort: 3389 3 protocol: TCP selector: special: key 4 type: NodePort 5",
"oc create -f <service_name>.yaml",
"oc get service -n example-namespace",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rdpservice NodePort 172.30.232.73 <none> 3389:30000/TCP 5m",
"oc get node <node_name> -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.24.0 192.168.55.101 <none>",
"%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis -A",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: {} 1",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: kubevirt-hyperconverged spec: tektonPipelinesNamespace: <user_namespace> 1 featureGates: deployTektonTaskResources: true 2 #",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> 1 pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {}",
"oc apply -f windows10-installer-run.yaml",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {}",
"oc apply -f windows10-customize-run.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1",
"metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2",
"metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname",
"metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value",
"metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3",
"certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s",
"error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration",
"apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2",
"oc create -f <file_name>.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", \"plugins\": [ { \"type\": \"cnv-bridge\", \"bridge\": \"br1\", \"vlan\": 1 1 }, { \"type\": \"cnv-tuning\" 2 } ] }'",
"oc create -f pxe-net-conf.yaml",
"interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1",
"devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2",
"networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf",
"oc create -f vmi-pxe-boot.yaml",
"virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created",
"oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running",
"virtctl vnc vmi-pxe-boot",
"virtctl console vmi-pxe-boot",
"ip addr",
"3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff",
"kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2",
"oc apply -f <virtual_machine>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1",
"apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3",
"oc create -f 100-worker-kernel-arg-iommu.yaml",
"oc get MachineConfig",
"lspci -nnv | grep -i nvidia",
"02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)",
"variant: openshift version: 4.12.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci",
"butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml",
"oc apply -f 100-worker-vfiopci.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s",
"lspci -nnk -d 10de:",
"04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5",
"oc describe node <node_name>",
"Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"",
"oc describe node <node_name>",
"Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1",
"lspci -nnk | grep NVIDIA",
"02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)",
"spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDevicesTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>",
"permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2",
"oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'",
"mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108",
"nvidia-105 nvidia-108 nvidia-217 nvidia-299",
"mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-22 - nvidia-223 - nvidia-224",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3",
"oc create -f 100-worker-kernel-arg-iommu.yaml",
"oc get MachineConfig",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: <.> mediatedDevicesTypes: <.> - nvidia-231 nodeMediatedDeviceTypes: <.> - mediatedDevicesTypes: <.> - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: <.> mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q",
"oc describe node <node_name>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-1Q name: gpu2",
"lspci -nnk | grep <device_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1",
"oc apply -f <file_name>.yaml",
"lspci | grep watchdog -i",
"echo c > /proc/sysrq-trigger",
"pkill -9 watchdog",
"yum install watchdog",
"#watchdog-device = /dev/watchdog",
"systemctl enable --now watchdog.service",
"oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true 1",
"oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'",
"oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <appropriate_class_name>",
"oc patch storageclass <current_default_storage_class> -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'",
"oc patch storageclass <appropriate_storage_class> -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'",
"oc edit -n openshift-cnv HyperConverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 10Gi managedDataSource: centos7 4 retentionPolicy: \"None\" 5",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: false name: rhel8-image-cron",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged spec: status: 1 dataImportCronTemplates: 2 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 3 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 4",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1",
"oc get ns",
"oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>",
"apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> -----END CERTIFICATE-----",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 4 secretRef: endpoint-secret 5 certConfigMap: \"\" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}",
"oc create -f vm-fedora-datavolume.yaml",
"oc get pods",
"oc describe dv fedora-dv 1",
"virtctl console vm-fedora-datavolume",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi",
"oc create -f import-pv-datavolume.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4",
"oc create -f <cloner-datavolume>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: \"source-namespace\" name: \"my-favorite-vm-disk\"",
"oc create -f <vm-clone-datavolumetemplate>.yaml",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5",
"oc create -f <cloner-datavolume>.yaml",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}",
"oc create -f <vm-name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4",
"oc create -f example-vm-ipv6.yaml",
"oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7",
"oc create -f <service_name>.yaml",
"oc get service -n example-namespace",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s",
"ssh [email protected] -p 27017",
"ssh fedora@USDNODE_IP -p 30000",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"<bridge-network>\", 3 \"type\": \"cnv-bridge\", 4 \"bridge\": \"<bridge-interface>\", 5 \"macspoofchk\": true, 6 \"vlan\": 100, 7 \"preserveDefaultVlan\": false 8 }'",
"oc create -f <network-attachment-definition.yaml> 1",
"oc get network-attachment-definition <bridge-network>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3",
"oc apply -f <example-vm.yaml>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12",
"oc create -f <name>-sriov-network.yaml",
"oc get net-attach-def -n <namespace>",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6",
"oc apply -f <vm-sriov.yaml> 1",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk",
"oc apply -f <vm_name>.yaml 1",
"apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP",
"oc create -f <service_name>.yaml 1",
"kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true",
"kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2",
"oc describe vmi <vmi_name>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux",
"oc create -f hpp_cr.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3",
"oc create -f storageclass_csi.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux",
"oc create -f hpp_pvc_template_pool.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9",
"oc edit -n openshift-cnv storageprofile <storage_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2",
"oc get cdiconfig -o yaml",
"oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: \"example.exampleurl.com\" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 pvc: preallocation: true 2",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2",
"oc create -f <my-vmsnapshot>.yaml",
"oc wait my-vm my-vmsnapshot --for condition=Ready",
"oc describe vmsnapshot <my-vmsnapshot>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3",
"oc create -f <my-vmrestore>.yaml",
"oc get vmrestore <my-vmrestore>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1",
"oc delete vmsnapshot <my-vmsnapshot>",
"oc get vmsnapshot",
"kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem",
"oc get pv <destination-pv> -o yaml",
"spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2",
"oc label pv <destination-pv> node=node01",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: \"<source-vm-disk>\" 2 namespace: \"<source-namespace>\" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5",
"oc apply -f <clone-datavolume.yaml>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"oc create -f <blank-image-datavolume>.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 storage: 4 resources: requests: storage: <2Gi> 5",
"oc create -f <cloner-datavolume>.yaml",
"virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]",
"virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>",
"cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF",
"podman build -t <registry>/<container_disk_name>:latest .",
"podman push <registry>/<container_disk_name>:latest",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1",
"oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'",
"oc patch pv <pv_name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc describe pvc <pvc_name> | grep 'Mounted By:'",
"oc delete pvc <pvc_name>",
"oc get pv <pv_name> -o yaml > <file_name>.yaml",
"oc delete pv <pv_name>",
"rm -rf <path_to_share_storage>",
"oc create -f <new_pv_name>.yaml",
"oc edit pvc <pvc_name>",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/virtual-machines |
Chapter 1. Preparing to install on IBM Z and IBM LinuxONE | Chapter 1. Preparing to install on IBM Z and IBM LinuxONE 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 1.2. Choosing a method to install OpenShift Container Platform on IBM Z or IBM LinuxONE The OpenShift Container Platform installation program offers the following methods for deploying a cluster on IBM Z(R): Interactive : You can deploy a cluster with the web-based Assisted Installer . This method requires no setup for the installer, and is ideal for connected environments like IBM Z(R). Local Agent-based : You can deploy a cluster locally with the Agent-based Installer . It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command line interface (CLI). This approach is ideal for disconnected networks. Full control : You can deploy a cluster on infrastructure that you prepare and maintain , which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Table 1.1. IBM Z(R) installation options Assisted Installer Agent-based Installer User-provisioned installation Installer-provisioned installation IBM Z(R) with z/VM [✓] [✓] [✓] Restricted network IBM Z(R) with z/VM [✓] [✓] IBM Z(R) with RHEL KVM [✓] [✓] [✓] Restricted network IBM Z(R) with RHEL KVM [✓] [✓] IBM Z(R) in an LPAR [✓] Restricted network IBM Z(R) in an LPAR [✓] For more information about the installation process, see the Installation process . 1.2.1. User-provisioned infrastructure installation of OpenShift Container Platform on IBM Z User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the IBM Z(R) platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform with z/VM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with z/VM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network by using an internal mirror of the installation release content. 
You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform with KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with RHEL KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform in an LPAR on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_z_and_ibm_linuxone/preparing-to-install-on-ibm-z |
18.6. Malicious Software and Spoofed IP Addresses | 18.6. Malicious Software and Spoofed IP Addresses More elaborate rules can be created that control access to specific subnets, or even specific nodes, within a LAN. You can also restrict certain dubious applications or programs such as trojans, worms, and other client/server viruses from contacting their server. For example, some trojans scan networks for services on ports from 31337 to 31340 (called the elite ports in cracking terminology). Since there are no legitimate services that communicate via these non-standard ports, blocking them can effectively diminish the chances that potentially infected nodes on your network independently communicate with their remote master servers. The following rules drop all TCP traffic that attempts to use port 31337: You can also block outside connections that attempt to spoof private IP address ranges to infiltrate your LAN. For example, if your LAN uses the 192.168.1.0/24 range, you can design a rule that instructs the Internet-facing network device (for example, eth0) to drop any packets to that device with an address in your LAN IP range. Because it is recommended to reject forwarded packets as a default policy, any other spoofed IP address to the external-facing device (eth0) is rejected automatically. Note There is a distinction between the DROP and REJECT targets when dealing with appended rules. The REJECT target denies access and returns a connection refused error to users who attempt to connect to the service. The DROP target, as the name implies, drops the packet without any warning. Administrators can use their own discretion when using these targets. However, to avoid user confusion and attempts to continue connecting, the REJECT target is recommended. | [
"iptables -A OUTPUT -o eth0 -p tcp --dport 31337 --sport 31337 -j DROP iptables -A FORWARD -o eth0 -p tcp --dport 31337 --sport 31337 -j DROP",
"iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-firewall-ipt-rule |
Chapter 2. About Amazon EC2 | Chapter 2. About Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2), a service operated by amazon.com, provides customers with a customizable virtual computing environment. With this service, an Amazon Machine Image (AMI) can be booted to create a virtual machine or instance. Users can install the software they require on an instance and are charged according to the capacity used. Amazon EC2 is designed to be flexible and allows users to quickly scale their deployed applications. See the Amazon Web Services website for more information. About Amazon Machine Images An Amazon Machine Image (AMI) is a template for an EC2 virtual machine instance. Users create EC2 instances by selecting an appropriate AMI to create the instance from. The primary component of an AMI is a read-only filesystem that contains an installed operating system as well as other software. Each AMI has different software installed for different use cases. Amazon EC2 includes many AMIs that both Amazon Web Services and third parties provide. Users can also create their own custom AMIs. 2.1. Types of JBoss EAP Amazon Machine Images Use JBoss EAP on Amazon Elastic Compute Cloud (Amazon EC2) by deploying a public or private Amazon Machine Image (AMI). Important Red Hat does not currently provide support for the full-ha profile, in either standalone instances or a managed domain. JBoss EAP public AMI Access JBoss EAP public AMIs through the AWS marketplace . The public AMIs are offered with the pay-as-you-go (PAYG) model. With a PAYG model, you only pay based on the number of computing resources you used. JBoss EAP private AMI You can use your existing subscription to access JBoss EAP private AMIs through Red Hat Cloud Access. For information about Red Hat Cloud Access, see About Red Hat Cloud Access . 2.2. Red Hat Cloud Access features Membership in the Red Hat Cloud Access program provides access to supported private Amazon Machine Images (AMIs) created by Red Hat. The Red Hat AMIs have the following software pre-installed and fully supported by Red Hat: Red Hat Enterprise Linux JBoss EAP Product updates with RPMs using Red Hat Update Infrastructure Each of the Red Hat AMIs is only a starting point, requiring further configuration to the requirements of your application. 2.3. Supported Amazon EC2 instance types Red Hat Cloud Access supports the following Amazon EC2 instance types. See Amazon Elastic Compute Cloud User Guide for Linux Instances for more information about each instance. The minimum virtual hardware requirements for an AMI to deploy JBoss EAP are the following: Virtual CPU: 2 Memory: 4 GB However, depending on the applications you deploy on JBoss EAP you might require additional processors and memory. 2.4. Supported Red Hat AMIs The supported Red Hat AMIs can be identified by their names, as shown in the following examples: Private image example Public image example RHEL-x is the version number of Red Hat Enterprise Linux installed in the AMI. Example 9 . JBEAP-x.y.z is the version number of JBoss EAP installed in the AMI. Example 8.0.0 . 20240804 is the date that the AMI was created in the format of YYYYMMDD. x86_64 is the architecture of the AMI. This can be x86_64 or i386 . Access2 or Marketplace denote whether the AMI is private or public as follows: Private image contains Access2 . Public image contains Marketplace . | [
"RHEL-9-JBEAP-8.0.0_HVM_GA-20240909-x86_64-0-Access2-GP2",
"RHEL-9-JBEAP-8.0.0_HVM_GA-20240804-x86_64-0-Marketplace-GP2"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/assembly-about-amazon-ec2_default |
function::ansi_set_color | function::ansi_set_color Name function::ansi_set_color - Set the ansi Select Graphic Rendition mode. Synopsis Arguments fg Foreground color to set. General Syntax ansi_set_color(fg:long) Description Sends ansi code for Select Graphic Rendition mode for the given foreground color. Black (30), Blue (34), Green (32), Cyan (36), Red (31), Purple (35), Brown (33), Light Gray (37). | [
"function ansi_set_color(fg:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ansi-set-color |
Chapter 6. Understanding identity provider configuration | Chapter 6. Understanding identity provider configuration The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 6.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . Once an identity provider has been defined, you can use RBAC to define and apply permissions . 6.3. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 6.4. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. 
lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 6.5. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 6.6. Manually provisioning a user when using the lookup mapping method Typically, identities are automatically mapped to users during login. The lookup mapping method disables this automatic mapping, which requires you to provision users manually. If you are using the lookup mapping method, use the following procedure for each user after configuring the identity provider. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create an OpenShift Container Platform user: USD oc create user <username> Create an OpenShift Container Platform identity: USD oc create identity <identity_provider>:<identity_provider_user_id> Where <identity_provider_user_id> is a name that uniquely represents the user in the identity provider. Create a user identity mapping for the created user and identity: USD oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username> Additional resources How to create user, identity and map user and identity in LDAP authentication for mappingMethod as lookup inside the OAuth manifest How to create user, identity and map user and identity in OIDC authentication for mappingMethod as lookup | [
"oc delete secrets kubeadmin -n kube-system",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc create user <username>",
"oc create identity <identity_provider>:<identity_provider_user_id>",
"oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/understanding-identity-provider |
2.3. keepalived Scheduling Overview | 2.3. keepalived Scheduling Overview Using Keepalived provides a great deal of flexibility in distributing traffic across real servers, in part due to the variety of scheduling algorithms supported. Load balancing is superior to less flexible methods, such as Round-Robin DNS where the hierarchical nature of DNS and the caching by client machines can lead to load imbalances. Additionally, the low-level filtering employed by the LVS router has advantages over application-level request forwarding because balancing loads at the network packet level causes minimal computational overhead and allows for greater scalability. Using assigned weights gives arbitrary priorities to individual machines. Using this form of scheduling, it is possible to create a group of real servers using a variety of hardware and software combinations and the active router can evenly load each real server. The scheduling mechanism for Keepalived is provided by a collection of kernel patches called IP Virtual Server or IPVS modules. These modules enable layer 4 ( L4 ) transport layer switching, which is designed to work well with multiple servers on a single IP address. To track and route packets to the real servers efficiently, IPVS builds an IPVS table in the kernel. This table is used by the active LVS router to redirect requests from a virtual server address to and returning from real servers in the pool. 2.3.1. Keepalived Scheduling Algorithms The structure that the IPVS table takes depends on the scheduling algorithm that the administrator chooses for any given virtual server. To allow for maximum flexibility in the types of services you can cluster and how these services are scheduled, Keepalived supports the following scheduling algorithms listed below. Round-Robin Scheduling Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular due to the fact that it is network-connection based and not host-based. Load Balancer round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. Weighted Round-Robin Scheduling Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests. Least-Connection Distributes more requests to real servers with fewer active connections. Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. 
The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Locality-Based Least-Connection Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and has a server in its half load, in which case it assigns the IP address to the least loaded real server. Locality-Based Least-Connection Scheduling with Replication Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication. Destination Hash Scheduling Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster. Source Hash Scheduling Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls. Shortest Expected Delay Distributes connection requests to the server that has the shortest delay expected based on number of connections on a given server divided by its assigned weight. Never Queue A two-pronged scheduler that first finds and sends connection requests to a server that is idling, or has no connections. If there are no idling servers, the scheduler defaults to the server that has the least delay in the same manner as Shortest Expected Delay . 2.3.2. Server Weight and Scheduling The administrator of Load Balancer can assign a weight to each node in the real server pool. This weight is an integer value which is factored into any weight-aware scheduling algorithms (such as weighted least-connections) and helps the LVS router more evenly load hardware with different capabilities. Weights work as a ratio relative to one another. For instance, if one real server has a weight of 1 and the other server has a weight of 5, then the server with a weight of 5 gets 5 connections for every 1 connection the other server gets. The default value for a real server weight is 1. Although adding weight to varying hardware configurations in a real server pool can help load-balance the cluster more efficiently, it can cause temporary imbalances when a real server is introduced to the real server pool and the virtual server is scheduled using weighted least-connections. For example, suppose there are three servers in the real server pool. Servers A and B are weighted at 1 and the third, server C, is weighted at 2. If server C goes down for any reason, servers A and B evenly distributes the abandoned load. 
However, once server C comes back online, the LVS router sees it has zero connections and floods the server with all incoming requests until it is on par with servers A and B. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-scheduling-vsa |
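A minimal shell illustration (an assumption added for clarity, not keepalived or IPVS code) of how integer weights behave as ratios in weighted round-robin scheduling, using the A=1, B=1, C=2 example above: each server enters the rotation once per weight unit, so server C receives two connections for every one that A or B receives.

# build the rotation pool: A and B once each (weight 1), C twice (weight 2)
pool=(A B C C)
# schedule eight connections by cycling through the pool
for i in $(seq 0 7); do echo -n "${pool[i % 4]} "; done; echo
# prints: A B C C A B C C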
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide_with_fedramp/proc-providing-feedback-on-redhat-documentation |
Chapter 47. Closing cases | Chapter 47. Closing cases A case instance can be completed when there are no more activities to be performed and the business goal is achieved, or it can be closed prematurely. Usually the case owner closes the case when all work is completed and the case goals have been met. When you close a case, consider adding a comment about why the case instance is being closed. A closed case can be reopened later with the same case ID if required. When a case is reopened, stages that were active when the case was closed will be active when the case is reopened. You can close case instances remotely using KIE Server REST API requests or directly in the Showcase application. 47.1. Closing a case using the KIE Server REST API You can use a REST API request to close a case instance. Red Hat Process Automation Manager includes the Swagger client, which includes endpoints and documentation for REST API requests. Alternatively, you can use the same endpoints to make API calls using your preferred client or Curl. Prerequisites A case instance has been started using Showcase. You are able to authenticate API requests as a user with the admin role. Procedure Open the Swagger REST API client in a web browser: http://localhost:8080/kie-server/docs Under Case Instances :: Case Management , open the POST request with the following endpoint: /server/containers/{id}/cases/instances/{caseId} Click Try it out and fill in the required parameters: Table 47.1. Parameters Name Description id itorders caseId IT-0000000001 Optional: Include a comment to be included in the case file. To leave a comment, type it into the body text field as a String . Click Execute to close the case. To confirm the case is closed, open the Showcase application and change the case list status to Closed . 47.2. Closing a case in the Showcase application A case instance is complete when no more activities need to be performed and the business goal has been achieved. After a case is complete, you can close the case to indicate that the case is complete and that no further work is required. When you close a case, consider adding a specific comment about why you are closing the case. If needed, you can reopen the case later with the same case ID. You can use the Showcase application to close a case instance at any time. From Showcase, you can easily view the details of the case or leave a comment before closing it. Prerequisites You are logged in to the Showcase application and are the owner or administrator for a case instance that you want to close. Procedure In the Showcase application, locate the case instance you want to close from the list of case instances. To close the case without viewing the details first, click Close . To close the case from the case details page, click the case in the list to open it. From the case overview page you can add comments to the case and verify that you are closing the correct case based on the case information. Click Close to close the case. Click Back to Case List in the upper-left corner of the page to return to the Showcase case list view. Click the drop-down list to Status and select Canceled to view the list of closed and canceled cases. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-closing-cases-ref |
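A curl sketch of the close-case request described above (all values are assumptions: a local KIE Server at localhost:8080, an administrative user wbadmin, and the itorders / IT-0000000001 parameters from the table; adjust them to your environment):

# POST to the close-case endpoint with an optional String comment as the request body
curl -X POST \
  -u 'wbadmin:wbadmin' \
  -H 'Content-Type: application/json' \
  -d '"Closing the case because the business goal was achieved"' \
  'http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001'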
Chapter 1. Overview of HawtIO | Chapter 1. Overview of HawtIO HawtIO is a diagnostic Console for the Red Hat build of Apache Camel and Red Hat build of AMQ. It is a pluggable Web diagnostic console built with modern Web technologies such as React and PatternFly . HawtIO provides a central interface to examine and manage the details of one or more deployed HawtIO-enabled containers. HawtIO is available when you install HawtIO standalone or use HawtIO on OpenShift. The integrations that you can view and manage in HawtIO depend on the plugins that are running. You can monitor HawtIO and system resources, perform updates, and start or stop services. The pluggable architecture is based on Webpack Module Federation and is highly extensible; you can dynamically extend HawtIO with your plugins or automatically discover plugins inside the JVM. HawtIO has built-in plugins already to make it highly useful out of the box for your JVM application. The plugins include Apache Camel, Connect, JMX, Logs, Runtime, Quartz, and Spring Boot. HawtIO is primarily designed to be used with Camel Quarkus and Camel Spring Boot. It's also a tool for managing microservice applications. HawtIO is cloud-native; it's ready to go over the cloud! You can deploy it to Kubernetes and OpenShift with the HawtIO Operator . Among the benefits of HawtIO are: Runtime management of JVM via JMX, especially that of Camel applications and AMQ broker, with specialized views Visualization and debugging/tracing of Camel routes Simple managing and monitoring of application metrics The following diagram depicts the architectural overview of HawtIO: HawtIO Standalone HawtIO On OpenShift | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/overview-of-hawtio |
Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation | Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads Deployments . Click Workloads Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/backing-openshift-container-platform-applications-with-openshift-data-foundation_osp |
Chapter 6. Working with Helm charts | Chapter 6. Working with Helm charts 6.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. A running instance of the chart in a cluster is called a release . A new release is created every time a chart is installed on the cluster. Each time a chart is installed, or a release is upgraded or rolled back, an incremental revision is created. 6.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Container Platform or Kubernetes resources. Package and share your applications as charts. 6.1.2. Red Hat Certification of Helm charts for OpenShift You can choose to verify and certify your Helm charts by Red Hat for all the components you will be deploying on the Red Hat OpenShift Container Platform. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters. 6.1.3. Additional resources For more information on how to certify your Helm charts as a Red Hat partner, see Red Hat Certification of Helm charts for OpenShift . For more information on OpenShift and Container certification guides for Red Hat partners, see Partner Guide for OpenShift and Container Certification . For a list of the charts, see the Red Hat Helm index file . You can view the available charts at the Red Hat Marketplace . For more information, see Using the Red Hat Marketplace . 6.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 6.2.1. On Linux Download the Helm binary and add it to your path: Linux (x86_64, amd64) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Linux on IBM Z and LinuxONE (s390x) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm Linux on IBM Power (ppc64le) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 6.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 6.2.3. 
On Windows 10 Download the latest .exe file and put in a directory of your preference. Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 6.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 6.3. Configuring custom Helm chart repositories You can install Helm charts on an OpenShift Container Platform cluster using the following methods: The CLI. The Developer perspective of the web console. The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file . As a cluster administrator, you can add multiple Helm chart repositories, apart from the default one, and display the Helm charts from these repositories in the Developer Catalog . 6.3.1. Installing a Helm chart on an OpenShift Container Platform cluster Prerequisites You have a running OpenShift Container Platform cluster and you have logged into it. You have installed Helm. Procedure Create a new project: USD oc new-project vault Add a repository of Helm charts to your local Helm client: USD helm repo add openshift-helm-charts https://charts.openshift.io/ Example output "openshift-helm-charts" has been added to your repositories Update the repository: USD helm repo update Install an example HashiCorp Vault: USD helm install example-vault openshift-helm-charts/hashicorp-vault Example output NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault! Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2 6.3.2. Installing Helm charts using the Developer perspective You can use either the Developer perspective in the web console or the CLI to select and install a chart from the Helm charts listed in the Developer Catalog . You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console. Prerequisites You have logged in to the web console and have switched to the Developer perspective . Procedure To create Helm releases from the Helm charts provided in the Developer Catalog : In the Developer perspective, navigate to the +Add view and select a project. Then click Helm Chart option to see all the Helm Charts in the Developer Catalog . Select a chart and read the description, README, and other details about the chart. Click Install Helm Chart . Figure 6.1. Helm charts in developer catalog In the Install Helm Chart page: Enter a unique name for the release in the Release Name field. Select the required chart version from the Chart Version drop-down list. Configure your Helm chart by using the Form View or the YAML View . 
Note Where available, you can switch between the YAML View and Form View . The data is persisted when switching between the views. Click Install to create a Helm release. You will be redirected to the Topology view where the release is displayed. If the Helm chart has release notes, the chart is pre-selected and the right panel displays the release notes for that release. You can upgrade, rollback, or uninstall a Helm release by using the Actions button on the side panel or by right-clicking a Helm release. 6.3.3. Using Helm in the web terminal You can use Helm by initializing the web terminal in the Developer perspective of the web console. For more information, see Using the web terminal . 6.3.4. Creating a custom Helm chart on OpenShift Container Platform Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Container Platform objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. 5 The Version of your chart as per the Semantic Versioning (SemVer) 2.0.0 Specification. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 6.3.5. Adding custom Helm chart repositories As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster. Sample Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository, run: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed. For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 6.2. Chart repositories filter Note If a cluster administrator removes all of the chart repositories, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel. 6.3.6. 
Creating credentials and CA certificates to add Helm chart repositories Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to it. You can use the web console as well as the CLI to add credentials and certificates. Procedure To configure the credentials and certificates, and then add a Helm chart repository using the CLI: In the openshift-config namespace, create a ConfigMap object with a custom CA certificate in PEM encoded format, and store it under the ca-bundle.crt key within the config map: USD oc create configmap helm-ca-cert \ --from-file=ca-bundle.crt=/path/to/certs/ca.crt \ -n openshift-config In the openshift-config namespace, create a Secret object to add the client TLS configurations: USD oc create secret tls helm-tls-configs \ --cert=/path/to/certs/client.crt \ --key=/path/to/certs/client.key \ -n openshift-config Note that the client certificate and key must be in PEM encoded format and stored under the keys tls.crt and tls.key , respectively. Add the Helm repository as follows: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF The ConfigMap and Secret are consumed in the HelmChartRepository CR using the tlsConfig and ca fields. These certificates are used to connect to the Helm repository URL. By default, all authenticated users have access to all configured charts. However, for chart repositories where certificates are needed, you must provide users with read access to the helm-ca-cert config map and helm-tls-configs secret in the openshift-config namespace, as follows: USD cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["helm-ca-cert"] verbs: ["get"] - apiGroups: [""] resources: ["secrets"] resourceNames: ["helm-tls-configs"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF 6.3.7. Filtering Helm Charts by their certification level You can filter Helm charts based on their certification level in the Developer Catalog . Procedure In the Developer perspective , navigate to the +Add view and select a project. From the Developer Catalog tile, select the Helm Chart option to see all the Helm charts in the Developer Catalog . Use the filters to the left of the list of Helm charts to filter the required charts: Use the Chart Repositories filter to filter charts provided by Red Hat Certification Charts or OpenShift Helm Charts . Use the Source filter to filter charts sourced from Partners , Community , or Red Hat . Certified charts are indicated with the ( ) icon. Note The Source filter will not be visible when there is only one provider type. You can now select the required chart and install it. 6.3.8. Disabling Helm Chart repositories You can disable Helm Charts from a particular Helm Chart Repository in the catalog by setting the disabled property in the HelmChartRepository custom resource to true . 
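As an alternative to editing the full custom resource, which the following procedure shows, you can set the flag directly with a patch. This is a minimal sketch that assumes a HelmChartRepository object named azure-sample-repo already exists in the cluster:

$ oc patch helmchartrepository azure-sample-repo --type=merge -p '{"spec":{"disabled":true}}'

To show the repository in the catalog again, run the same command with "disabled": false.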
Procedure To disable a Helm Chart repository by using the CLI, add the disabled: true flag to the custom resource. For example, to disable an Azure sample chart repository, run: To disable a recently added Helm Chart repository by using the web console: Go to Custom Resource Definitions and search for the HelmChartRepository custom resource. Go to Instances , find the repository you want to disable, and click its name. Go to the YAML tab, add the disabled: true flag in the spec section, and click Save . Example The repository is now disabled and will not appear in the catalog. 6.4. Working with Helm releases You can use the Developer perspective in the web console to update, roll back, or uninstall a Helm release. 6.4.1. Prerequisites You have logged in to the web console and have switched to the Developer perspective . 6.4.2. Upgrading a Helm release You can upgrade a Helm release to a new chart version or update the release configuration. Procedure In the Topology view, select the Helm release to see the side panel. Click Actions Upgrade Helm Release . In the Upgrade Helm Release page, select the Chart Version you want to upgrade to, and then click Upgrade to create another Helm release. The Helm Releases page displays the two revisions. 6.4.3. Rolling back a Helm release If a release fails, you can roll back the Helm release to a previous version. Procedure To roll back a release by using the Helm view: In the Developer perspective, navigate to the Helm view to see the Helm Releases in the namespace. Click the Options menu adjoining the listed release, and select Rollback . In the Rollback Helm Release page, select the Revision you want to roll back to and click Rollback . In the Helm Releases page, click the chart to see the details and resources for that release. Go to the Revision History tab to see all the revisions for the chart. Figure 6.3. Helm revision history If required, you can further use the Options menu adjoining a particular revision and select the revision to roll back to. 6.4.4. Uninstalling a Helm release Procedure In the Topology view, right-click the Helm release and select Uninstall Helm Release . In the confirmation prompt, enter the name of the chart and click Uninstall . | [
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/working-with-helm-charts |
Chapter 18. Managing asset metadata and version history | Chapter 18. Managing asset metadata and version history Most assets within Business Central have metadata and version information associated with them to help you identify and organize them within your projects. You can manage asset metadata and version history from the asset designer in Business Central. Procedure In Business Central, go to Menu Design Projects and click the project name. Select the asset from the list to open the asset designer. In the asset designer window, select Overview . If an asset doesn't have an Overview tab, then no metadata is associated with that asset. Select the Version History or Metadata tab to edit and update version and metadata details. Note Another way to update the working version of an asset is by clicking Latest Version in the top-right corner of the asset designer. Figure 18.1. Latest version of an asset Click Save to save changes. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assets_metadata_managing_proc |
Chapter 20. Network policy | Chapter 20. Network policy 20.1. About network policy As a developer, you can define network policies that restrict traffic to pods in your cluster. 20.1.1. About network policy In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.13, OpenShift SDN supports using network policy in its default network isolation mode. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: Important To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy. 
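For example, after you create the allow-from-hostnetwork and allow-same-namespace policies shown in this section, you can apply them to the target namespace in one step and confirm that both exist. This is a minimal sketch; the file names are illustrative and assume that you saved each policy to its own file:

$ oc apply -f allow-from-hostnetwork.yaml -f allow-same-namespace.yaml -n <namespace>
$ oc get networkpolicy -n <namespace>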
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 20.1.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 policy-group.network.openshift.io/ingress:"" label supports both OpenShift-SDN and OVN-Kubernetes. 20.1.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 20.1.2. Optimizations for network policy with OpenShift SDN Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. When designing your network policy, refer to the following guidelines: Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. 
NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. 20.1.3. Optimizations for network policy with OVN-Kubernetes network plugin When designing your network policy, refer to the following guidelines: For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules, than multiple network policies with subsets of ingress or egress rules. Every ingress or egress rule based on the podSelector or namespaceSelector spec generates the number of OVS flows proportional to number of pods selected by network policy + number of pods selected by ingress or egress rule . Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod. For example, the following policy contains two rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend The following policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]} The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend The following network policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend You can apply this optimization when only multiple selectors are expressed as one. In cases where selectors are based on different labels, it may not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically. 20.1.4. steps Creating a network policy Optional: Defining a default network policy 20.1.5. Additional resources Projects and namespaces Configuring multitenant network policy NetworkPolicy API 20.2. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 20.2.1. 
Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 20.2.2. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 20.2.3. Creating a default deny all network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. 
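After you complete the procedure that follows, you can confirm the effect of the deny-by-default policy from inside the namespace. This is a minimal sketch that assumes a web service named web is already running in the default namespace; with only the deny-by-default policy in place, the request should time out:

$ oc run test-deny --namespace=default --rm -i -t --image=alpine -- sh
# wget -qO- --timeout=2 http://web.default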
Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3 1 namespace: default deploys this policy to the default namespace. 2 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 3 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created 20.2.4. Creating a network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output networkpolicy.networking.k8s.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 20.2.5. Creating a network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. 
You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output networkpolicy.networking.k8s.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 20.2.6. Creating a network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . 
Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output networkpolicy.networking.k8s.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 20.2.7. Additional resources Accessing the web console Logging for egress firewall and network policy rules 20.3. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 20.3.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 20.3.2. Viewing network policies using the CLI You can examine the network policies in a namespace. 
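If you are not sure which namespace a policy was created in, you can list network policies across all namespaces before inspecting a specific one. This is a minimal sketch and assumes that your account is allowed to read network policies in those namespaces:

$ oc get networkpolicy --all-namespaces
$ oc get networkpolicy <policy_name> -n <namespace> -o yaml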
Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 20.4. Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 20.4.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. 
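For small, scripted changes you can also update a policy in place without opening an editor. The following is a minimal sketch that uses a merge patch; the policy name and the app2 label are illustrative, and because a merge patch replaces the entire ingress list, the port entry is repeated as well:

$ oc patch networkpolicy allow-27107 -n <namespace> --type=merge -p '{"spec":{"ingress":[{"from":[{"podSelector":{"matchLabels":{"app":"app2"}}}],"ports":[{"protocol":"TCP","port":27017}]}]}}'

Afterward, run oc describe networkpolicy allow-27107 -n <namespace> to confirm that the change was applied.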
Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 20.4.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 20.4.3. Additional resources Creating a network policy 20.5. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 20.5.1. Deleting a network policy using the CLI You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 20.6. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 20.6.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. 
Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 20.6.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 20.7. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. Note If you are using the OpenShift SDN network plugin, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set. 20.7.1. 
Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 20.7.2. steps Defining a default network policy 20.7.3. Additional resources OpenShift SDN network isolation modes | [
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"networkpolicy.networking.k8s.io/web-allow-external created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"networkpolicy.networking.k8s.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"networkpolicy.networking.k8s.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc get networkpolicy",
"oc describe networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy allow-same-namespace",
"Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress",
"oc get networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy <policy_name> -n <namespace>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc delete networkpolicy <policy_name> -n <namespace>",
"networkpolicy.networking.k8s.io/default-deny deleted",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/network-policy |
Chapter 33. Servers and Services | Chapter 33. Servers and Services The named service now binds to all interfaces With this update, BIND is able to react to situations when a new IP address is added to an interface. If the new address is allowed by the configuration, BIND will automatically start to listen on that interface. (BZ# 1294506 ) Fix for tomcat-digest to generate password hashes When using the tomcat-digest utility to create an SHA hash of Tomcat passwords, the command terminated unexpectedly with the ClassNotFoundException Java exception. A patch has been provided to fix this bug and tomcat-digest now generates password hashes as expected. (BZ# 1240279 ) Tomcat can now use shell expansion in configuration files within the new conf.d directory Previously, the /etc/sysconfig/tomcat and /etc/tomcat/tomcat.conf files were loaded without shell expansion, causing the application to terminate unexpectedly. This update provides a mechanism for using shell expansion in the Tomcat configuration files by adding a new configuration directory, /etc/tomcat/conf.d . Any files placed in the new directory may now include shell variables. (BZ# 1221896 ) Fix for the tomcat-jsvc service unit to create two independent Tomcat servers When trying to start multiple independent Tomcat servers, the second server failed to start due to the jsvc service returning an error. This update fixes the jsvc systemd service unit as well as the handling of the TOMCAT_USER variable. (BZ# 1201409 ) The dbus-daemon service no longer becomes unresponsive due to leaking file descriptors Previously, the dbus-daemon service incorrectly handled multiple messages containing file descriptors if they were received in a short time period. As a consequence, dbus-daemon leaked file descriptors and became unresponsive. A patch has been applied to correctly handle multiple file descriptors from different messages inside dbus-daemon . As a result, dbus-daemon closes and passes file descriptors correctly and no longer becomes unresponsive in the described situation. (BZ# 1325870 ) Update for marking tomcat-admin-webapps package configuration files Previously, the tomcat-admin-webapps web.xml files were not marked as configuration files. Consequently, upgrading the tomcat-admin-webapps package overwrote the /usr/share/tomcat/webapps/host-manager/WEB-INF/web.xml and /usr/share/tomcat/webapps/manager/WEB-INF/web.xml files, causing custom user configuration to be automatically removed. This update fixes classification of these files, thus preventing this problem. (BZ# 1208402 ) Ghostscript no longer hangs when converting a PDF file to PNG Previously, when converting a PDF file into a PNG file, Ghostscript could become unresponsive. This bug has been fixed, and the conversion time is now proportional to the size of the PDF file being converted. (BZ# 1302121 ) The named-chroot service now starts correctly Due to a regression, the -t /var/named/chroot option was omitted in the named-chroot.service file. As a consequence, if the /etc/named.conf file was missing, the named-chroot service failed to start. Additionally, if different named.conf files existed in the /etc/ and /var/named/chroot/etc/ directories, the named-checkconf utility incorrectly checked the one in the changed-root directory when the service was started. With this update, the option in the service file has been added and the named-chroot service now works correctly. 
(BZ# 1278082 ) AT-SPI2 driver added to brltty The Assistive Technology Service Provider Interface driver version 2 (AT-SPI2) has been added to the brltty daemon. AT-SPI2 enables using brltty with, for example, the GNOME Accessibility Toolkit. (BZ# 1324672 ) A new --ignore-missing option for tuned-adm verify The --ignore-missing command-line option has been added to the tuned-adm verify command. This command verifies whether a Tuned profile has been successfully applied, and displays differences between the requested Tuned profile and the current system settings. The --ignore-missing parameter causes tuned-adm verify to silently skip features that are not supported on the system, thus preventing the described errors. (BZ# 1243807 ) The new modules Tuned plug-in The modules plug-in allows Tuned to load and reload kernel modules with parameters specified in the settings of the Tuned profiles. (BZ# 1249618 ) The number of inotify user watches increased to 65536 To allow for more pods on a Red Hat Enterprise Linux Atomic host, the number of inotify user watches has been increased by a factor of 8 to 65536. (BZ# 1322001 ) Timer migration for the realtime Tuned profile has been disabled Previously, the realtime Tuned profile that is included in the tuned-profiles-realtime package set the value of the kernel.timer_migration variable to 1. As a consequence, realtime applications could be negatively affected. This update disables the timer migration in the realtime profile. (BZ# 1323283 ) rcu_nocbs no longer missing from kernel boot parameters Previously, the rcu_nocbs kernel parameter was not set in the realtime-virtual-host and realtime-virtual-guest tuned profiles. With this update, rcu_nocbs is set as expected. (BZ# 1334479 ) The global limit on how much time realtime scheduling may use has been removed in the realtime Tuned profile Prior to this update, the Tuned utility configuration for the kernel.sched_rt_runtime_us sysctl variable in the realtime profile included in the tuned-profiles-realtime package was incorrect. As a consequence, creating a virtual machine instance caused an error due to incompatible scheduling time. Now, the value of kernel.sched_rt_runtime_us is set to -1 (no limit), and the described problem no longer occurs. (BZ# 1346715 ) sapconf now detects the NTP configuration properly Previously, the sapconf utility did not check whether the host system was configured to use the Network Time Protocol (NTP). As a consequence, even when NTP was configured, sapconf displayed the following error: With this update, sapconf properly checks for the NTP configuration, and the described problem no longer occurs. (BZ# 1228550 ) sapconf lists default packages correctly Prior to this update, the sapconf utility passed an incorrect parameter to the repoquery utility, which caused sapconf not to list the default packages in package groups. The bug has been fixed, and sapconf now lists default packages as expected. (BZ#1235608) The logrotate utility now saves status to the /var/lib/logrotate/ directory Previously, the logrotate utility saved status to the /var/lib/logrotate.status file. Consequently, logrotate did not work on systems where /var/lib was a read-only file system. With this update, the status file has been moved to the new /var/lib/logrotate/ directory, which can be mounted with write permissions. As a result, logrotate now works on systems where /var/lib is a read-only file system. 
(BZ# 1272236 ) Support for printing to an SMB printer using Kerberos with cups With this update, the cups package creates the symbolic link /usr/lib/cups/backend/smb referring to the /usr/libexec/samba/cups_backend_smb file. The symbolic link is used by the smb_krb5_wrapper utility to print to a server message block (SMB)-shared printer using Kerberos authentication. (BZ#1302055) Newly installed tomcat package has a correct shell pointing to /sbin/nologin Previously, the postinstall script set the Tomcat shell to /bin/nologin , which does not exist. Consequently, users failed to get a helpful message about the login access denial when attempting to log in as the Tomcat user. This bug has been fixed, and the postinstall script now correctly sets the Tomcat shell to /sbin/nologin . (BZ# 1277197 ) | [
"3: NTP Service should be configured and started"
]
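A minimal sketch of the /etc/tomcat/conf.d mechanism described in the Tomcat shell-expansion note above; the file name custom.conf and the option values are illustrative assumptions rather than part of the release note:

cat << 'EOF' > /etc/tomcat/conf.d/custom.conf
# Files in /etc/tomcat/conf.d/ are now read with shell expansion applied,
# so variables such as CATALINA_BASE and JAVA_OPTS can be referenced here.
JAVA_OPTS="$JAVA_OPTS -Xmx1g -Djava.io.tmpdir=$CATALINA_BASE/temp"
EOF
systemctl restart tomcat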
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/bug_fixes_servers_and_services |
Chapter 22. AWS CloudWatch Component | Chapter 22. AWS CloudWatch Component Available as of Camel version 2.11 The CW component allows messages to be sent to Amazon CloudWatch metrics. The implementation of the Amazon API is provided by the AWS SDK . Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon CloudWatch. More information is available at Amazon CloudWatch . 22.1. URI Format aws-cw://namespace[?options] The metrics will be created if they don't already exist. You can append query options to the URI in the following format, ?option=value&option2=value&... 22.2. URI Options The AWS CloudWatch component supports 5 options, which are listed below. Name Description Default Type configuration (advanced) The AWS CW default configuration CwConfiguration accessKey (producer) Amazon AWS Access Key String secretKey (producer) Amazon AWS Secret Key String region (producer) The region in which CW client needs to work String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AWS CloudWatch endpoint is configured using URI syntax: with the following path and query parameters: 22.2.1. Path Parameters (1 parameter): Name Description Default Type namespace Required The metric namespace String 22.2.2. Query Parameters (11 parameters): Name Description Default Type amazonCwClient (producer) To use the AmazonCloudWatch as the client AmazonCloudWatch name (producer) The metric name String proxyHost (producer) To define a proxy host when instantiating the CW client String proxyPort (producer) To define a proxy port when instantiating the CW client Integer region (producer) The region in which CW client needs to work String timestamp (producer) The metric timestamp Date unit (producer) The metric unit String value (producer) The metric value Double synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean accessKey (security) Amazon AWS Access Key String secretKey (security) Amazon AWS Secret Key String 22.3. Spring Boot Auto-Configuration The component supports 16 options, which are listed below. 
Name Description Default Type camel.component.aws-cw.access-key Amazon AWS Access Key String camel.component.aws-cw.configuration.access-key Amazon AWS Access Key String camel.component.aws-cw.configuration.amazon-cw-client To use the AmazonCloudWatch as the client AmazonCloudWatch camel.component.aws-cw.configuration.name The metric name String camel.component.aws-cw.configuration.namespace The metric namespace String camel.component.aws-cw.configuration.proxy-host To define a proxy host when instantiating the CW client String camel.component.aws-cw.configuration.proxy-port To define a proxy port when instantiating the CW client Integer camel.component.aws-cw.configuration.region The region in which CW client needs to work String camel.component.aws-cw.configuration.secret-key Amazon AWS Secret Key String camel.component.aws-cw.configuration.timestamp The metric timestamp Date camel.component.aws-cw.configuration.unit The metric unit String camel.component.aws-cw.configuration.value The metric value Double camel.component.aws-cw.enabled Enable aws-cw component true Boolean camel.component.aws-cw.region The region in which CW client needs to work String camel.component.aws-cw.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.aws-cw.secret-key Amazon AWS Secret Key String Required CW component options You have to provide the amazonCwClient in the Registry or your accessKey and secretKey to access Amazon's CloudWatch . 22.4. Usage 22.4.1. Message headers evaluated by the CW producer Header Type Description CamelAwsCwMetricName String The Amazon CW metric name. CamelAwsCwMetricValue Double The Amazon CW metric value. CamelAwsCwMetricUnit String The Amazon CW metric unit. CamelAwsCwMetricNamespace String The Amazon CW metric namespace. CamelAwsCwMetricTimestamp Date The Amazon CW metric timestamp. CamelAwsCwMetricDimensionName String Camel 2.12: The Amazon CW metric dimension name. CamelAwsCwMetricDimensionValue String Camel 2.12: The Amazon CW metric dimension value. CamelAwsCwMetricDimensions Map<String, String> Camel 2.12: A map of dimension names and dimension values. 22.4.2. Advanced AmazonCloudWatch configuration If you need more control over the AmazonCloudWatch instance configuration you can create your own instance and refer to it from the URI: from("direct:start") .to("aws-cw://namespace?amazonCwClient=#client"); The #client refers to an AmazonCloudWatch in the Registry. For example, if your Camel application is running behind a firewall: AWSCredentials awsCredentials = new BasicAWSCredentials("myAccessKey", "mySecretKey"); ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost("http://myProxyHost"); clientConfiguration.setProxyPort(8080); AmazonCloudWatch client = new AmazonCloudWatchClient(awsCredentials, clientConfiguration); registry.bind("client", client); 22.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel (2.10 or higher). 22.6. See Also Configuring Camel Component Endpoint Getting Started AWS Component | [
"aws-cw://namespace[?options]",
"aws-cw:namespace",
"from(\"direct:start\") .to(\"aws-cw://namepsace?amazonCwClient=#client\");",
"AWSCredentials awsCredentials = new BasicAWSCredentials(\"myAccessKey\", \"mySecretKey\"); ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost(\"http://myProxyHost\"); clientConfiguration.setProxyPort(8080); AmazonCloudWatch client = new AmazonCloudWatchClient(awsCredentials, clientConfiguration); registry.bind(\"client\", client);",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency>"
]
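The Spring Boot table in section 22.3 is easier to read with a concrete configuration; the following sketch assumes a Spring Boot project layout, and the credential and region values are placeholders rather than values taken from this chapter:

cat << 'EOF' >> src/main/resources/application.properties
# aws-cw component defaults picked up by Spring Boot auto-configuration
camel.component.aws-cw.access-key=myAccessKey
camel.component.aws-cw.secret-key=mySecretKey
camel.component.aws-cw.region=eu-west-1
EOF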
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/aws-cw-component |
Chapter 48. Managing host groups using Ansible playbooks | Chapter 48. Managing host groups using Ansible playbooks To learn more about host groups in Identity Management (IdM) and using Ansible to perform operations involving host groups in Identity Management (IdM), see the following: Host groups in IdM Ensuring the presence of IdM host groups Ensuring the presence of hosts in IdM host groups Nesting IdM host groups Ensuring the presence of member managers in IdM host groups Ensuring the absence of hosts from IdM host groups Ensuring the absence of nested host groups from IdM host groups Ensuring the absence of member managers from IdM host groups 48.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 48.2. Ensuring the presence of IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are created in IdM using the ipa hostgroup-add command. The result of adding a host group to IdM is the state of the host group being present in IdM. Because of the Ansible reliance on idempotence, to add a host group to IdM using Ansible, you must create a playbook in which you define the state of the host group as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. For example, to ensure the presence of a host group named databases , specify name: databases in the - ipahostgroup task. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-present.yml file. In the playbook, state: present signifies a request to add the host group to IdM unless it already exists there. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose presence in IdM you wanted to ensure: The databases host group exists in IdM. 48.3. 
Ensuring the presence of hosts in IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of hosts in host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file have been added to IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host with the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This playbook adds the db.idm.example.com host to the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself. Instead, only an attempt is made to add db.idm.example.com to databases . Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about a host group to see which hosts are present in it: The db.idm.example.com host is present as a member of the databases host group. 48.4. Nesting IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of nested host groups in Identity Management (IdM) host groups using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. 
To ensure that a nested host group A exists in a host group B : in the Ansible playbook, specify, among the - ipahostgroup variables, the name of the host group B using the name variable. Specify the name of the nested hostgroup A with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This Ansible playbook ensures the presence of the mysql-server and oracle-server host groups in the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself to IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group in which nested host groups are present: The mysql-server and oracle-server host groups exist in the databases host group. 48.5. Ensuring the presence of member managers in IdM host groups using Ansible playbooks The following procedure describes ensuring the presence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are adding as member managers and the name of the host group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify if the group_name group contains example_member and project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. 48.6. Ensuring the absence of hosts from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of hosts from host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . 
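Before starting the procedure, it can help to spot-check the control-node prerequisites listed above; the following commands are only a suggested check, not part of the documented steps:

ansible --version        # expect Ansible 2.13 or later
rpm -q ansible-freeipa   # confirms the ansible-freeipa package is installed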
Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host and host group information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host whose absence from the host group you want to ensure using the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook ensures the absence of the db.idm.example.com host from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to remove the databases group itself. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group and the hosts it contains: The db.idm.example.com host does not exist in the databases host group. 48.7. Ensuring the absence of nested host groups from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of nested host groups from outer host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. Specify, among the - ipahostgroup variables, the name of the outer host group using the name variable. Specify the name of the nested hostgroup with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook makes sure that the mysql-server and oracle-server host groups are absent from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to ensure the databases group itself is deleted from IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group from which nested host groups should be absent: The output confirms that the mysql-server and oracle-server nested host groups are absent from the outer databases host group. 48.8. Ensuring the absence of IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are removed from IdM using the ipa hostgroup-del command. 
The result of removing a host group from IdM is the state of the host group being absent from IdM. Because of the Ansible reliance on idempotence, to remove a host group from IdM using Ansible, you must create a playbook in which you define the state of the host group as absent: state: absent . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-absent.yml file. This playbook ensures the absence of the databases host group from IdM. The state: absent means a request to delete the host group from IdM unless it is already deleted. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose absence you ensured: The databases host group does not exist in IdM. 48.9. Ensuring the absence of member managers from IdM host groups using Ansible playbooks The following procedure describes ensuring the absence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are removing as member managers and the name of the host group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify if the group_name group does not contain example_member or project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. | [
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases ipa: ERROR: databases: host group not found",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2"
]
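All of the playbook runs above assume a secret.yml Ansible vault that stores ipaadmin_password. As a hedged sketch (the vault password file name password_file simply mirrors the one used in the ansible-playbook commands above), the vault can be prepared once with:

ansible-vault create --vault-password-file=password_file ~/MyPlaybooks/secret.yml
# in the editor that opens, add a single line such as:
# ipaadmin_password: <IdM admin password>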
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-host-groups-using-ansible-playbooks_configuring-and-managing-idm |
Chapter 5. Starting and Stopping Red Hat JBoss Data Virtualization | Chapter 5. Starting and Stopping Red Hat JBoss Data Virtualization 5.1. Starting Red Hat JBoss Data Virtualization To run Red Hat JBoss Data Virtualization, you must first start the JBoss EAP server. To start the JBoss EAP server, follow the instructions below for your operating system: For Red Hat Enterprise Linux, open a command line window and enter this command from your EAP_HOME directory (the directory in which EAP is installed): For Microsoft Windows, open a command line window and enter these commands: To ensure that the server has started correctly and to verify that there have been no errors, check the server log: EAP_HOME/standalone/log/server.log Note You can also verify that the execution was error-free by opening the Management Console in a web browser and logging in using the username and password of a registered JBoss EAP Management User. The console's address is http://localhost:9990/console/ . (For more information about using the Management Console, see the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide .) Note For more advanced starting options, see the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide . 5.2. Stopping Red Hat JBoss Data Virtualization To stop Red Hat JBoss Data Virtualization, halt the JBoss EAP Server. Do this by pressing Ctrl+C in the terminal in which EAP is running. | [
"./bin/standalone.sh",
"chdir EAP_HOME/bin standalone.bat"
]
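As a quick, hedged follow-up to the log check described in section 5.1 (the grep pattern is only a suggestion, not part of the guide), recent startup errors can be surfaced with:

tail -n 200 EAP_HOME/standalone/log/server.log | grep -iE 'error|exception'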
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/getting_started_guide/starting_and_stopping_red_hat_jboss_data_virtualization |
Chapter 4. Configuring Red Hat OpenStack Platform for Federation | Chapter 4. Configuring Red Hat OpenStack Platform for Federation The following nodes require an assigned Fully-Qualified Domain Name (FQDN): The host running the Dashboard (horizon). The host running the Identity Service (keystone), referenced in this guide as USDFED_KEYSTONE_HOST . Note that more than one host will run a service in a high-availability environment, so the IP address is not a host address but rather the IP address bound to the service. The host running RH-SSO. The host running IdM. The Red Hat OpenStack Platform director deployment does not configure DNS or assign FQDNs to the nodes; however, the authentication protocols (and TLS) require the use of FQDNs. 4.1. Retrieving the IP address In Red Hat OpenStack Platform, there is one common public IP address for all OpenStack services, separated by port number. To determine the public IP address of the overcloud services, use the openstack endpoint list command: 4.2. Setting the host variables and naming the host You must determine the IP address and port to use. In this example, the IP address is 10.0.0.101 and the port is 13000. Confirm this value in overcloudrc: Assign the IP address a fully qualified domain name (FQDN), and write it to the /etc/hosts file. This example uses overcloud.localdomain: Note Although Red Hat OpenStack Platform director configures the hosts files on the overcloud nodes, you might need to add the host entry on any external hosts that participate. Set the USDFED_KEYSTONE_HOST and USDFED_KEYSTONE_HTTPS_PORT in the fed_variables file. This example uses the same values: Because Mellon runs on the Apache server that hosts Identity service (keystone), the Mellon host:port and keystone host:port values must match. Note If you run the hostname command on one of the Controller nodes, its output is similar to controller-0.localdomain . This is an internal cluster name, not its public name. Use the public IP address instead. 4.3. Installing helper files You must install the helper files as part of the configuration. Copy the configure-federation and fed_variables files that you created as part of Section 1.5, "Using a configuration script" into the stack home directory on undercloud-0 . 4.4. Setting your deployment variables The file fed_variables contains variables specific to your federation deployment. These variables are referenced in this guide as well as in the configure-federation helper script. Each site-specific federation variable is prefixed with FED_ . Ensure that every FED_ variable in fed_variables is provided a value. 4.5. Copying the helper files You must have the configuration file and variable files on controller-0 to continue. Copy the configure-federation and the edited fed_variables from the ~/stack home directory on undercloud-0 to the ~/heat-admin home directory on controller-0 : Note You can use the configure-federation script to perform the above step: USD ./configure-federation copy-helper-to-controller 4.6. Initializing the working environments On the undercloud node, as the stack user, create the fed_deployment directory. This location is the file stash: Note You can use the configure-federation script to perform the step: Use SSH to connect to controller-0 , and create the ~/fed_deployment directory as the heat-admin user. This location is the file stash: Note You can use the configure-federation script to perform the step. From the controller-0 node: 4.7. 
Installing mod_auth_mellon You must install the mod_auth_mellon on each controller in your environment. On each controller, run the following: 4.8. Adding the RH-SSO FQDN to each Controller Ensure that every controller is reachable by its fully-qualified domain name (FQDN). The mellon service runs on each Controller node and connects to the RH-SSO IdP. If the FQDN of the RH-SSO IdP is not resolvable through DNS, manually add the FQDN to the /etc/hosts file on all controller nodes after the Heat Hosts section: 4.9. Installing and configuring Mellon on the Controller node The keycloak-httpd-client-install tool performs many of the steps needed to configure mod_auth_mellon and have it authenticate against the RH-SSO IdP. Run the keycloak-httpd-client-install tool on the node where mellon runs. In this example, mellon runs on the overcloud controllers protecting the Identity service (keystone). Note Red Hat OpenStack Platform is a high availability deployment with multiple overcloud Controller nodes, each running identical copies. As a result, you must replicate the mellon configuration on each Controller node. To do this, install and configure mellon on controller-0, and collect the configuration files that the keycloak-httpd-client-install tool created into a tar file. Use Object Storage (swift) to copy the archive to each Controller and unarchive the files there. Run the RH-SSO client installation: Note You can use the configure-federation script to perform the above step: USD ./configure-federation client-install After the client RPM installation, you should see output similar to this: 4.10. Editing the Mellon configuration During the IdP-assertion-to-Keystone mapping phase, your groups must be in a semicolon separated list. Use the following procedure to configure mellon so that when it receives multiple values for an attribute, it combines them into a semicolon-separated single value. Procedure Open the v3_mellon_keycloak_openstack.conf configuration file for editing: Add the MellonMergeEnvVars parameter to the <Location /v3> block: 4.11. Creating an archive of the generated configuration files To replicate the mellon configuration on all Controller nodes, create an archive of the files to install on each Controller node. Store the archive in the ~/fed_deployment subdirectory. Create the compressed archive: Note You can use the configure-federation script to perform the step: 4.12. Retrieving the Mellon configuration archive On the undercloud-0 node, retrieve the archive you created and extract the files so that you can access the data as needed in subsequent steps. Note You can use the configure-federation script to perform the above step: USD ./configure-federation fetch-sp-archive 4.13. Preventing Puppet from deleting unmanaged HTTPD files By default, the Puppet Apache module purges any files in Apache configuration directories that it does not manage. This prevents Apache from operating against the configuration that Puppet enforces. However, this conflicts with the manual configuration of mellon in the HTTPD configuration directories. The Apache Puppet apache::purge_configs flag is enabled by default, which directs Puppet to delete files that belong to the mod_auth_mellon RPM. Puppet also deletes the configuration files that keycloak-httpd-client-install generates. Until Puppet controls the mellon files, disable the apache::purge_configs flag. Note Disabling the apache::purge_configs flag opens the Controller nodes to vulnerabilities. 
Re-enable it when Puppet adds support for managing mellon. To override the apache::purge_configs flag, create a Puppet file that contains the override, and add the override file to the list of Puppet files you use when you run the overcloud_deploy.sh script. Create the fed_deployment/puppet_override_apache.yaml environment file and add the following content: Add puppet_override_apache.yaml as the last environment file in the overcloud_deploy.sh script: Note You can use the configure-federation script to perform the above step: USD ./configure-federation puppet-override-apache 4.14. Configuring Identity service (keystone) for federation Keystone domains require extra configuration. However, if the keystone Puppet module is enabled, it can perform this extra configuration step. In one of the Puppet YAML files, add the following: Set the following values in /etc/keystone/keystone.conf to enable federation. auth:methods A list of allowed authentication methods. By default the list is: ['external', 'password', 'token', 'oauth1'] . You must enable SAML by using the mapped method. Additionally, the external method must be excluded. Set the value to the following: password,token,oauth1,mapped . federation:trusted_dashboard A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of this list. You can use this configuration option multiple times for different values. You must set this to use web-based SSO flows. For this deployment the value is: https://USDFED_KEYSTONE_HOST/dashboard/auth/websso/ The host is USDFED_KEYSTONE_HOST because Red Hat OpenStack Platform director co-locates both keystone and horizon on the same host. If horizon runs on a different host from keystone, you must adjust accordingly. federation:sso_callback_template The absolute path to an HTML file that is used as a Single Sign-On callback handler. This page redirects the user from the Identity service back to a trusted dashboard host by form encoding a token in a POST request. The default value is sufficient for most deployments. federation:remote_id_attribute The value that is used to obtain the entity ID of the Identity provider. For mod_auth_mellon , use Mellon_IDP . Set this value in the mellon configuration file using the Mellon IDP directive. Create the fed_deployment/puppet_override_keystone.yaml file with the following content: Append the created environment file at the end of the overcloud_deploy.sh script. Note You can use the configure-federation script to perform the above step: USD ./configure-federation puppet-override-keystone 4.15. Deploying the Mellon configuration archive Use Object Storage (swift) artifacts to install the mellon configuration files on each Controller node. Note You can use the configure-federation script to perform the above step: USD ./configure-federation deploy-mellon-configuration 4.16. Redeploying the overcloud To apply the changes from the Puppet YAML configuration files and Object Storage artifacts, run the deploy command: Important: When you make additional changes to the Controller nodes by re-running Puppet, the overcloud_deploy.sh script might overwrite configurations. Do not apply the Puppet configuration after this procedure to avoid losing manual edits that you make to the configuration files on the overcloud Controller nodes. 4.17. 
Use proxy persistence for the Identity service (keystone) on each Controller When mod_auth_mellon establishes a session, it cannot share its state information across multiple servers. Because the high number of redirections used by SAML involves state information, the same server must process all transactions. Therefore, you must configure HAProxy to direct each client's requests to the same server each time. There are two ways that HAProxy can bind a client to the same server: Affinity Use affinity when information from a layer below the application layer is used to pin a client request to a single server. Persistence Use persistence when application-layer information binds a client to a single server with a sticky session. Persistence is much more accurate than affinity. Use the following procedure to implement persistence. The HAProxy cookie directive names a cookie and its parameters for persistence. The HAProxy server directive has a cookie option that sets the value of the cookie to the name of the server. If an incoming request does not have a cookie identifying the back-end server, then HAProxy selects a server based on its configured balancing algorithm. Procedure To enable persistence in the keystone_public block of the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg configuration file, add the following line: This setting states that SERVERID is the name of the persistence cookie. Edit each server line and add cookie <server-name> as an additional option: 4.18. Creating federated resources Create the Identity service (keystone) targets, users, and groups for consumption by the identity provider (IdP). Procedure Source the overcloudrc file on the undercloud as the stack user, and run the following commands: Note You can use the configure-federation script to perform the above step: USD ./configure-federation create-federated-resources 4.19. Creating the Identity provider in Red Hat OpenStack Platform The IdP must be registered in the Identity service (keystone), which creates a binding between the entityID in the SAML assertion and the name of the IdP in the Identity service. Procedure Locate the entityID of the RH-SSO IdP, which is stored in the IdP metadata. The IdP metadata is stored in the /var/lib/config-data/puppet-generated/keystone/etc/httpd/federation/v3_keycloak_USDFED_RHSSO_REALM_idp_metadata.xml file. You can also find the IdP metadata in the fed_deployment/var/lib/config-data/puppet-generated/keystone/etc/httpd/federation/v3_keycloak_USDFED_RHSSO_REALM_idp_metadata.xml file. Note the value of the entityID attribute, which is in the IdP metadata file within the <EntityDescriptor> element. Assign this value to the USDFED_IDP_ENTITY_ID variable. Name your IdP rhsso , which is assigned to the variable USDFED_OPENSTACK_IDP_NAME : Note You can use the configure-federation script to perform the above step: USD ./configure-federation openstack-create-idp 4.20. Create the Mapping File and Upload to Keystone Keystone performs a mapping to match the IdP's SAML assertion into a format that keystone can understand. The mapping is performed by keystone's mapping engine and is based on a set of mapping rules that are bound to the IdP. These are the mapping rules used in this example (as described in the introduction): This mapping file contains only one rule. Rules are divided into two parts: local and remote . The mapping engine works by iterating over the list of rules until one matches, and then executing it. 
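The rule set itself is not reproduced in this section, so the following is a rough reconstruction based only on the description that follows; the file name matches the fed_deployment/mapping_USD{FED_OPENSTACK_IDP_NAME}_saml2.json convention from section 4.20.1, and the exact JSON layout is an assumption:

cat << 'EOF' > fed_deployment/mapping_rhsso_saml2.json
[
    {
        "local": [
            {
                "user": {"name": "{0}"},
                "group": {"name": "federated_users", "domain": {"name": "federated_domain"}}
            }
        ],
        "remote": [
            {"type": "MELLON_NAME_ID"},
            {"type": "MELLON_groups", "any_one_of": ["openstack-users"]}
        ]
    }
]
EOF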
A rule is considered a match only if all the conditions in the remote part of the rule match. In this example the remote conditions specify: The assertion must contain a value called MELLON_NAME_ID. The assertion must contain a value called MELLON_groups, and at least one of the groups in the group list must be openstack-users. If the rule matches, then: The keystone user name will be assigned the value from MELLON_NAME_ID. The user will be assigned to the keystone group federated_users in the federated_domain domain. In summary, if the IdP successfully authenticates the user, and the IdP asserts that the user belongs to the group openstack-users, then keystone allows that user to access OpenStack with the privileges bound to the federated_users group in keystone. 4.20.1. Create the mapping To create the mapping in keystone, create a file containing the mapping rules and then upload it into keystone, giving it a reference name. Create the mapping file in the fed_deployment directory (for example, fed_deployment/mapping_USD{FED_OPENSTACK_IDP_NAME}_saml2.json), and assign the name USDFED_OPENSTACK_MAPPING_NAME to the mapping rules. For example: Note You can use the configure-federation script to perform the above procedure as two steps: create-mapping - creates the mapping file. openstack-create-mapping - uploads the file. 4.21. Create a Keystone Federation Protocol Keystone uses the Mapped protocol to bind an IdP to a mapping. To establish this binding: Note You can use the configure-federation script to perform the above step: USD ./configure-federation openstack-create-protocol 4.22. Fully-Qualify the Keystone Settings On each controller node, edit /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/10-keystone_wsgi_main.conf to confirm that the ServerName directive inside the VirtualHost block includes the HTTPS scheme, the public hostname, and the public port. You must also enable the UseCanonicalName directive. For example: Note Be sure to substitute the USDFED_ variables with the values specific to your deployment. 4.23. Configure Horizon to Use Federation On each controller node, edit /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings and make sure the following configuration values are set: Note Be sure to substitute the USDFED_ variables with the values specific to your deployment. 4.24. Configure Horizon to Use the X-Forwarded-Proto HTTP Header On each controller node, edit /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings and uncomment the line: Note You must restart the container for configuration changes to take effect. | [
"(overcloud) [stack@director ~]USD openstack endpoint list -c \"Service Name\" -c Interface -c URL | grep public | swift | public | http://10.0.0.101:8080/v1/AUTH_%(tenant_id)s | | panko | public | http://10.0.0.101:8977 | | nova | public | http://10.0.0.101:8774/v2.1 | | glance | public | http://10.0.0.101:9292 | | neutron | public | http://10.0.0.101:9696 | | keystone | public | http://10.0.0.101:5000 | | cinderv2 | public | http://10.0.0.101:8776/v2/%(tenant_id)s | | placement | public | http://10.0.0.101:8778/placement | | cinderv3 | public | http://10.0.0.101:8776/v3/%(tenant_id)s | | heat | public | http://10.0.0.101:8004/v1/%(tenant_id)s | | heat-cfn | public | http://10.0.0.101:8000/v1 | | gnocchi | public | http://10.0.0.101:8041 | | aodh | public | http://10.0.0.101:8042 | | cinderv3 | public | http://10.0.0.101:8776/v3/%(tenant_id)s |",
"export OS_AUTH_URL=https://10.0.0.101:13000/v2.0",
"10.0.0.101 overcloud.localdomain # FQDN of the external VIP",
"FED_KEYSTONE_HOST=\"overcloud.localdomain\" FED_KEYSTONE_HTTPS_PORT=13000",
"scp configure-federation fed_variables heat-admin@controller-0:/home/heat-admin",
"su - stack mkdir fed_deployment",
"./configure-federation initialize",
"ssh heat-admin@controller-0 mkdir fed_deployment",
"./configure-federation initialize",
"ssh heat-admin@controller-n # replace n with controller number sudo dnf install mod_auth_mellon",
"ssh heat-admin@controller-n sudo vi /etc/hosts Add this line (substituting the variables) before this line: HEAT_HOSTS_START - Do not edit manually within this section! HEAT_HOSTS_END USDFED_RHSSO_IP_ADDR USDFED_RHSSO_FQDN",
"ssh heat-admin@controller-0 USD dnf -y install keycloak-httpd-client-install USD sudo keycloak-httpd-client-install --client-originate-method registration --mellon-https-port USDFED_KEYSTONE_HTTPS_PORT --mellon-hostname USDFED_KEYSTONE_HOST --mellon-root /v3 --keycloak-server-url USDFED_RHSSO_URL --keycloak-admin-password USDFED_RHSSO_ADMIN_PASSWORD --app-name v3 --keycloak-realm USDFED_RHSSO_REALM -l \"/v3/auth/OS-FEDERATION/websso/mapped\" -l \"/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/mapped/websso\" -l \"/v3/OS-FEDERATION/identity_providers/rhsso/protocols/mapped/auth\"",
"[Step 1] Connect to Keycloak Server [Step 2] Create Directories [Step 3] Set up template environment [Step 4] Set up Service Provider X509 Certificates [Step 5] Build Mellon httpd config file [Step 6] Build Mellon SP metadata file [Step 7] Query realms from Keycloak server [Step 8] Create realm on Keycloak server [Step 9] Query realm clients from Keycloak server [Step 10] Get new initial access token [Step 11] Creating new client using registration service [Step 12] Enable saml.force.post.binding [Step 13] Add group attribute mapper to client [Step 14] Add Redirect URIs to client [Step 15] Retrieve IdP metadata from Keycloak server [Step 16] Completed Successfully",
"vi /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf",
"<Location /v3> MellonMergeEnvVars On \";\" </Location>",
"mkdir fed_deployment && cd fed_deployment tar -czvf rhsso_config.tar.gz --exclude '*.orig' --exclude '*~' /var/lib/config-data/puppet-generated/keystone/etc/httpd/federation /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf",
"./configure-federation create-sp-archive",
"scp heat-admin@controller-0:/home/heat-admin/fed_deployment/rhsso_config.tar.gz ~/fed_deployment tar -C fed_deployment -xvf fed_deployment/rhsso_config.tar.gz",
"parameter_defaults: ControllerExtraConfig: apache::purge_configs: false",
"-e /home/stack/fed_deployment/puppet_override_apache.yaml --log-file overcloud_deployment_14.log &> overcloud_install.log",
"keystone::using_domain_config: true",
"parameter_defaults: controllerExtraConfig: keystone::using_domain_config: true keystone::config::keystone_config: identity/domain_configurations_from_database: value: true auth/methods: value: external,password,token,oauth1,mapped federation/trusted_dashboard: value: https://USDFED_KEYSTONE_HOST/dashboard/auth/websso/ federation/sso_callback_template: value: /etc/keystone/sso_callback_template.html federation/remote_id_attribute: value: MELLON_IDP",
"-e /home/stack/fed_deployment/puppet_override_keystone.yaml --log-file overcloud_deployment_14.log &> overcloud_install.log",
"source ~/stackrc upload-swift-artifacts -f fed_deployment/rhsso_config.tar.gz",
"./overcloud_deploy.sh",
"cookie SERVERID insert indirect nocache",
"server controller-0 cookie controller-0 server controller-1 cookie controller-1",
"openstack domain create federated_domain openstack project create --domain federated_domain federated_project openstack group create federated_users --domain federated_domain openstack role add --group federated_users --group-domain federated_domain --domain federated_domain _member_ openstack role add --group federated_users --group-domain federated_domain --project federated_project _member_",
"openstack identity provider create --remote-id USDFED_IDP_ENTITY_ID USDFED_OPENSTACK_IDP_NAME",
"[ { \"local\": [ { \"user\": { \"name\": \"{0}\" }, \"group\": { \"domain\": { \"name\": \"federated_domain\" }, \"name\": \"federated_users\" } } ], \"remote\": [ { \"type\": \"MELLON_NAME_ID\" }, { \"type\": \"MELLON_groups\", \"any_one_of\": [\"openstack-users\"] } ] } ]",
"openstack mapping create --rules fed_deployment/mapping_rhsso_saml2.json USDFED_OPENSTACK_MAPPING_NAME",
"./configure-federation create-mapping ./configure-federation openstack-create-mapping",
"openstack federation protocol create --identity-provider USDFED_OPENSTACK_IDP_NAME --mapping USDFED_OPENSTACK_MAPPING_NAME mapped\"",
"<VirtualHost> ServerName https:USDFED_KEYSTONE_HOST:USDFED_KEYSTONE_HTTPS_PORT UseCanonicalName On </VirtualHost>",
"OPENSTACK_KEYSTONE_URL = \"https://USDFED_KEYSTONE_HOST:USDFED_KEYSTONE_HTTPS_PORT/v3\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"_member_\" WEBSSO_ENABLED = True WEBSSO_INITIAL_CHOICE = \"mapped\" WEBSSO_CHOICES = ( (\"mapped\", _(\"RH-SSO\")), (\"credentials\", _(\"Keystone Credentials\")), )",
"#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/federate_with_identity_service/steps |
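Note The cookie-based persistence configured in section 4.17 can be modeled with a short sketch. The following Python snippet is illustrative only; it is not part of HAProxy or the overcloud tooling. It shows the intended behavior: a request that carries a SERVERID cookie is always routed to the named back end, while a request without the cookie is balanced round-robin and then pinned to the chosen back end.

import itertools

# Hypothetical back ends, named as in the haproxy.cfg example (cookie <server-name>).
BACKENDS = ["controller-0", "controller-1"]
_round_robin = itertools.cycle(BACKENDS)

def route(request_cookies):
    """Return (backend, set_cookie) the way a SERVERID persistence cookie would behave."""
    pinned = request_cookies.get("SERVERID")
    if pinned in BACKENDS:
        return pinned, None                 # sticky: keep the SAML session on one node
    backend = next(_round_robin)            # otherwise fall back to load balancing
    return backend, ("SERVERID", backend)   # and pin subsequent requests to it

first = route({})                           # e.g. ('controller-0', ('SERVERID', 'controller-0'))
second = route({"SERVERID": first[0]})      # ('controller-0', None) -- same node again
print(first, second)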
Chapter 5. Installing a cluster on vSphere using the Agent-based Installer | Chapter 5. Installing a cluster on vSphere using the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster with an available release image. 5.1. Additional resources Preparing to install with the Agent-based Installer | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_vmware_vsphere/installing-vsphere-agent-based-installer |
Chapter 20. Dataset | Chapter 20. Dataset Both producer and consumer are supported. Testing of distributed and asynchronous processing is notoriously difficult. The Mock, Test and DataSet endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel's large range of Components together with the powerful Bean Integration. The DataSet component provides a mechanism to easily perform load and soak testing of your system. It works by allowing you to create DataSet instances both as a source of messages and as a way to assert that the data set is received. Camel will use the throughput logger when sending datasets. 20.1. URI format Where name is used to find the DataSet instance in the Registry. Camel ships with a support implementation of org.apache.camel.component.dataset.DataSet, the org.apache.camel.component.dataset.DataSetSupport class, which can be used as a base for implementing your own DataSet. Camel also ships with some implementations that can be used for testing: org.apache.camel.component.dataset.SimpleDataSet, org.apache.camel.component.dataset.ListDataSet and org.apache.camel.component.dataset.FileDataSet, all of which extend DataSetSupport. 20.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 20.2.1. Configuring Component Options The component level is the highest level, and it holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code. 20.2.2. Configuring Endpoint Options You do most of your configuration on endpoints, because endpoints often have many options that allow you to configure exactly what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 20.3. Component Options The Dataset component supports 5 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.
false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean exchangeFormatter (advanced) Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter 20.4. Endpoint Options The Dataset endpoint is configured using URI syntax: with the following path and query parameters: 20.4.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of DataSet to lookup in the registry. DataSet 20.4.2. Query Parameters (21 parameters) Name Description Default Type dataSetIndex (common) Controls the behaviour of the CamelDataSetIndex header. For Consumers: - off = the header will not be set - strict/lenient = the header will be set For Producers: - off = the header value will not be verified, and will not be set if it is not present = strict = the header value must be present and will be verified = lenient = the header value will be verified if it is present, and will be set if it is not present. Enum values: strict lenient off lenient String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean initialDelay (consumer) Time period in millis to wait before starting sending messages. 1000 long minRate (consumer) Wait until the DataSet contains at least this number of messages. 0 int preloadSize (consumer) Sets how many messages should be preloaded (sent) before the route completes its initialization. 0 long produceDelay (consumer) Allows a delay to be specified which causes a delay when a message is sent by the consumer (to simulate slow processing). 3 long exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern assertPeriod (producer) Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default this period is disabled. long consumeDelay (producer) Allows a delay to be specified which causes a delay when a message is consumed by the producer (to simulate slow processing). 0 long expectedCount (producer) Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly n'th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. -1 int failFast (producer) Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean reportGroup (producer) A number that is used to turn on throughput logging based on groups of the size. int resultMinimumWaitTime (producer) Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long resultWaitTime (producer) Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long retainFirst (producer) Specifies to only retain the first n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. 
Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int retainLast (producer) Specifies to only retain the last n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int sleepForEmptyTest (producer) Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero. long copyOnExchange (producer (advanced)) Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. true boolean 20.5. Configuring DataSet Camel will lookup in the Registry for a bean implementing the DataSet interface. So you can register your own DataSet as: <bean id="myDataSet" class="com.mycompany.MyDataSet"> <property name="size" value="100"/> </bean> 20.6. Example For example, to test that a set of messages are sent to a queue and then consumed from the queue without losing any messages: // send the dataset to a queue from("dataset:foo").to("activemq:SomeQueue"); // now lets test that the messages are consumed correctly from("activemq:SomeQueue").to("dataset:foo"); The above would look in the Registry to find the foo DataSet instance which is used to create the messages. Then you create a DataSet implementation, such as using the SimpleDataSet as described below, configuring things like how big the data set is and what the messages look like etc. 20.7. DataSetSupport (abstract class) The DataSetSupport abstract class is a nice starting point for new DataSets, and provides some useful features to derived classes. 20.7.1. Properties on DataSetSupport Property Type Default Description defaultHeaders Map<String,Object> null Specifies the default message body. For SimpleDataSet it is a constant payload; though if you want to create custom payloads per message, create your own derivation of DataSetSupport . outputTransformer org.apache.camel.Processor null size long 10 Specifies how many messages to send/consume. reportCount long -1 Specifies the number of messages to be received before reporting progress. Useful for showing progress of a large load test. 
If < 0, then size / 5, if is 0 then size , else set to reportCount value. 20.8. SimpleDataSet The SimpleDataSet extends DataSetSupport , and adds a default body. 20.8.1. Additional Properties on SimpleDataSet Property Type Default Description defaultBody Object <hello>world!</hello> Specifies the default message body. By default, the SimpleDataSet produces the same constant payload for each exchange. If you want to customize the payload for each exchange, create a Camel Processor and configure the SimpleDataSet to use it by setting the outputTransformer property. 20.9. ListDataSet The List`DataSet` extends DataSetSupport , and adds a list of default bodies. 20.9.1. Additional Properties on ListDataSet Property Type Default Description defaultBodies List<Object> empty LinkedList<Object> Specifies the default message body. By default, the ListDataSet selects a constant payload from the list of defaultBodies using the CamelDataSetIndex . If you want to customize the payload, create a Camel Processor and configure the ListDataSet to use it by setting the outputTransformer property. size long the size of the defaultBodies list Specifies how many messages to send/consume. This value can be different from the size of the defaultBodies list. If the value is less than the size of the defaultBodies list, some of the list elements will not be used. If the value is greater than the size of the defaultBodies list, the payload for the exchange will be selected using the modulus of the CamelDataSetIndex and the size of the defaultBodies list (i.e. CamelDataSetIndex % defaultBodies.size() ) 20.10. FileDataSet The FileDataSet extends ListDataSet , and adds support for loading the bodies from a file. 20.10.1. Additional Properties on FileDataSet Property Type Default Description sourceFile File null Specifies the source file for payloads delimiter String \z Specifies the delimiter pattern used by a java.util.Scanner to split the file into multiple payloads. 20.11. Spring Boot Auto-Configuration When using dataset with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-dataset-starter</artifactId> </dependency> The component supports 11 options, which are listed below. Name Description Default Type camel.component.dataset-test.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.dataset-test.enabled Whether to enable auto configuration of the dataset-test component. This is enabled by default. Boolean camel.component.dataset-test.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.dataset-test.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.dataset-test.log To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false Boolean camel.component.dataset.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.dataset.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.dataset.enabled Whether to enable auto configuration of the dataset component. This is enabled by default. Boolean camel.component.dataset.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.dataset.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.dataset.log To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false Boolean | [
"dataset:name[?options]",
"dataset:name",
"<bean id=\"myDataSet\" class=\"com.mycompany.MyDataSet\"> <property name=\"size\" value=\"100\"/> </bean>",
"// send the dataset to a queue from(\"dataset:foo\").to(\"activemq:SomeQueue\"); // now lets test that the messages are consumed correctly from(\"activemq:SomeQueue\").to(\"dataset:foo\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-dataset-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-dataset-component-starter |
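Note The ListDataSet payload-selection rule described above (CamelDataSetIndex % defaultBodies.size()) can be illustrated with a short sketch. The following Python snippet only demonstrates that arithmetic; it is not Camel code, and in a real route the equivalent logic runs inside org.apache.camel.component.dataset.ListDataSet.

# Illustration of how ListDataSet picks a payload when size > len(defaultBodies).
default_bodies = ["<hello>world!</hello>", "<hello>camel!</hello>", "<hello>dataset!</hello>"]
size = 7  # more exchanges than default bodies

for camel_dataset_index in range(size):
    payload = default_bodies[camel_dataset_index % len(default_bodies)]
    print(camel_dataset_index, payload)
# Indexes 0..6 cycle through the three bodies: 0, 1, 2, 0, 1, 2, 0.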
22.3. Connecting to a Samba Share | 22.3. Connecting to a Samba Share You can use Nautilus to view available Samba shares on your network. Select Main Menu Button (on the Panel) => Network Servers to view a list of Samba workgroups on your network. You can also type smb: in the Location: bar of Nautilus to view the workgroups. As shown in Figure 22.6, "SMB Workgroups in Nautilus" , an icon appears for each available SMB workgroup on the network. Figure 22.6. SMB Workgroups in Nautilus Double-click one of the workgroup icons to view a list of computers within the workgroup. Figure 22.7. SMB Machines in Nautilus As you can see from Figure 22.7, "SMB Machines in Nautilus" , there is an icon for each machine within the workgroup. Double-click on an icon to view the Samba shares on the machine. If a username and password combination is required, you are prompted for them. Alternately, you can also specify the Samba server and sharename in the Location: bar for Nautilus using the following syntax (replace <servername> and <sharename> with the appropriate values): 22.3.1. Command Line To query the network for Samba servers, use the findsmb command. For each server found, it displays its IP address, NetBIOS name, workgroup name, operating system, and SMB server version. To connect to a Samba share from a shell prompt, type the following command: Replace <hostname> with the hostname or IP address of the Samba server you want to connect to, <sharename> with the name of the shared directory you want to browse, and <username> with the Samba username for the system. Enter the correct password or press Enter if no password is required for the user. If you see the smb:\> prompt, you have successfully logged in. Once you are logged in, type help for a list of commands. If you wish to browse the contents of your home directory, replace sharename with your username. If the -U switch is not used, the username of the current user is passed to the Samba server. To exit smbclient , type exit at the smb:\> prompt. | [
"smb:// <servername> / <sharename> /",
"smbclient // <hostname> / <sharename> -U <username>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Samba-Connecting_to_a_Samba_Share |
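Note As an illustration of the smbclient syntax shown above, the following Python sketch (a hypothetical helper, not part of the Samba packages) builds and runs the same command with subprocess; the host, share, and user values are placeholders.

import subprocess

def connect_to_share(hostname, sharename, username):
    """Run smbclient interactively against //hostname/sharename as username."""
    # Equivalent to: smbclient //<hostname>/<sharename> -U <username>
    cmd = ["smbclient", f"//{hostname}/{sharename}", "-U", username]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Placeholder values; smbclient prompts for the password itself.
    connect_to_share("server1.example.com", "jsmith", "jsmith")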
Chapter 10. IO Subsystem Tuning | Chapter 10. IO Subsystem Tuning The io subsystem defines XNIO workers and buffer pools that are used by other JBoss EAP subsystems, such as Undertow and Remoting. 10.1. Configuring Workers You can create multiple separate workers that each have their own performance configuration and which handle different I/O tasks. For example, you could create one worker to handle HTTP I/O, and another worker to handle Jakarta Enterprise Beans I/O, and then separately configure the attributes of each worker for specific load requirements. See the IO Subsystem Attributes appendix for the list of configurable worker attributes. Worker attributes that significantly affect performance include io-threads which sets the total number of I/O threads that a worker can use, and task-max-threads which sets the maximum number of threads that can be used for a particular task. The defaults for these two attributes are calculated based on the server's CPU count. See the JBoss EAP Configuration Guide for instructions on how to create and configure workers . 10.1.1. Monitoring Worker Statistics You can view a worker's runtime statistics using the management CLI. This exposes worker statistics such as connection count, thread count, and queue size. The following command displays runtime statistics for the default worker: Note The number of core threads, which is tracked by the core-pool-size statistic, is currently always set to the same value as the maximum number of threads, which is tracked by the max-pool-size statistic. 10.2. Configuring Buffer Pools Note IO buffer pools are deprecated, but they are still set as the default in the current release. For more information about configuring Undertow byte buffer pools, see the Configuring Byte Buffer Pools section of the Configuration Guide for JBoss EAP. A buffer pool in the io subsystem is a pooled NIO buffer instance that is used specifically for I/O operations. Like workers , you can create separate buffer pools which can be dedicated to handle specific I/O tasks. See the IO Subsystem Attributes appendix for the list of configurable buffer pool attributes. The main buffer pool attribute that significantly affects performance is buffer-size . The default is calculated based on the RAM resources of your system, and is sufficient in most cases. If you are configuring this attribute manually, an ideal size for most servers is 16KB. See the JBoss EAP Configuration Guide for instructions on how to create and configure buffer pools . | [
"/subsystem=io/worker= default :read-resource(include-runtime=true,recursive=true)"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/performance_tuning_guide/io_tuning |
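Note The worker statistics shown by the management CLI command above can also be read over the HTTP management interface. The following Python sketch is an illustration under stated assumptions, not Red Hat tooling: it posts the equivalent read-resource operation to the management endpoint on port 9990 using digest authentication, and the host, port, and credentials are placeholders for your environment.

import requests
from requests.auth import HTTPDigestAuth

# Placeholder endpoint and credentials for a local EAP management interface.
MGMT_URL = "http://localhost:9990/management"
AUTH = HTTPDigestAuth("admin", "changeme")

# JSON form of: /subsystem=io/worker=default:read-resource(include-runtime=true,recursive=true)
operation = {
    "operation": "read-resource",
    "address": [{"subsystem": "io"}, {"worker": "default"}],
    "include-runtime": True,
    "recursive": True,
}

response = requests.post(MGMT_URL, json=operation, auth=AUTH)
response.raise_for_status()
print(response.json())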
Chapter 2. AlertingRule [monitoring.openshift.io/v1] | Chapter 2. AlertingRule [monitoring.openshift.io/v1] Description AlertingRule represents a set of user-defined Prometheus rule groups containing alerting rules. This resource is the supported method for cluster admins to create alerts based on metrics recorded by the platform monitoring stack in OpenShift, i.e. the Prometheus instance deployed to the openshift-monitoring namespace. You might use this to create custom alerting rules not shipped with OpenShift based on metrics from components such as the node_exporter, which provides machine-level metrics such as CPU usage, or kube-state-metrics, which provides metrics on Kubernetes usage. The API is mostly compatible with the upstream PrometheusRule type from the prometheus-operator. The primary difference being that recording rules are not allowed here - only alerting rules. For each AlertingRule resource created, a corresponding PrometheusRule will be created in the openshift-monitoring namespace. OpenShift requires admins to use the AlertingRule resource rather than the upstream type in order to allow better OpenShift specific defaulting and validation, while not modifying the upstream APIs directly. You can find upstream API documentation for PrometheusRule resources here: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec describes the desired state of this AlertingRule object. status object status describes the current state of this AlertOverrides object. 2.1.1. .spec Description spec describes the desired state of this AlertingRule object. Type object Required groups Property Type Description groups array groups is a list of grouped alerting rules. Rule groups are the unit at which Prometheus parallelizes rule processing. All rules in a single group share a configured evaluation interval. All rules in the group will be processed together on this interval, sequentially, and all rules will be processed. It's common to group related alerting rules into a single AlertingRule resources, and within that resource, closely related alerts, or simply alerts with the same interval, into individual groups. You are also free to create AlertingRule resources with only a single rule group, but be aware that this can have a performance impact on Prometheus if the group is extremely large or has very complex query expressions to evaluate. Spreading very complex rules across multiple groups to allow them to be processed in parallel is also a common use-case. 
groups[] object RuleGroup is a list of sequentially evaluated alerting rules. 2.1.2. .spec.groups Description groups is a list of grouped alerting rules. Rule groups are the unit at which Prometheus parallelizes rule processing. All rules in a single group share a configured evaluation interval. All rules in the group will be processed together on this interval, sequentially, and all rules will be processed. It's common to group related alerting rules into a single AlertingRule resources, and within that resource, closely related alerts, or simply alerts with the same interval, into individual groups. You are also free to create AlertingRule resources with only a single rule group, but be aware that this can have a performance impact on Prometheus if the group is extremely large or has very complex query expressions to evaluate. Spreading very complex rules across multiple groups to allow them to be processed in parallel is also a common use-case. Type array 2.1.3. .spec.groups[] Description RuleGroup is a list of sequentially evaluated alerting rules. Type object Required name rules Property Type Description interval string interval is how often rules in the group are evaluated. If not specified, it defaults to the global.evaluation_interval configured in Prometheus, which itself defaults to 30 seconds. You can check if this value has been modified from the default on your cluster by inspecting the platform Prometheus configuration: The relevant field in that resource is: spec.evaluationInterval name string name is the name of the group. rules array rules is a list of sequentially evaluated alerting rules. Prometheus may process rule groups in parallel, but rules within a single group are always processed sequentially, and all rules are processed. rules[] object Rule describes an alerting rule. See Prometheus documentation: - https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules 2.1.4. .spec.groups[].rules Description rules is a list of sequentially evaluated alerting rules. Prometheus may process rule groups in parallel, but rules within a single group are always processed sequentially, and all rules are processed. Type array 2.1.5. .spec.groups[].rules[] Description Rule describes an alerting rule. See Prometheus documentation: - https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules Type object Required alert expr Property Type Description alert string alert is the name of the alert. Must be a valid label value, i.e. may contain any Unicode character. annotations object (string) annotations to add to each alert. These are values that can be used to store longer additional information that you won't query on, such as alert descriptions or runbook links. expr integer-or-string expr is the PromQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending or firing alerts. This is most often a string representing a PromQL expression, e.g.: mapi_current_pending_csr > mapi_max_pending_csr In rare cases this could be a simple integer, e.g. a simple "1" if the intent is to create an alert that is always firing. This is sometimes used to create an always-firing "Watchdog" alert in order to ensure the alerting pipeline is functional. for string for is the time period after which alerts are considered firing after first returning results. Alerts which have not yet fired for long enough are considered pending. labels object (string) labels to add or overwrite for each alert. 
The results of the PromQL expression for the alert will result in an existing set of labels for the alert, after evaluating the expression, for any label specified here with the same name as a label in that set, the label here wins and overwrites the value. These should typically be short identifying values that may be useful to query against. A common example is the alert severity, where one sets severity: warning under the labels key: 2.1.6. .status Description status describes the current state of this AlertOverrides object. Type object Property Type Description observedGeneration integer observedGeneration is the last generation change you've dealt with. prometheusRule object prometheusRule is the generated PrometheusRule for this AlertingRule. Each AlertingRule instance results in a generated PrometheusRule object in the same namespace, which is always the openshift-monitoring namespace. 2.1.7. .status.prometheusRule Description prometheusRule is the generated PrometheusRule for this AlertingRule. Each AlertingRule instance results in a generated PrometheusRule object in the same namespace, which is always the openshift-monitoring namespace. Type object Required name Property Type Description name string name of the referenced PrometheusRule. 2.2. API endpoints The following API endpoints are available: /apis/monitoring.openshift.io/v1/alertingrules GET : list objects of kind AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules DELETE : delete collection of AlertingRule GET : list objects of kind AlertingRule POST : create an AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name} DELETE : delete an AlertingRule GET : read the specified AlertingRule PATCH : partially update the specified AlertingRule PUT : replace the specified AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name}/status GET : read status of the specified AlertingRule PATCH : partially update status of the specified AlertingRule PUT : replace status of the specified AlertingRule 2.2.1. /apis/monitoring.openshift.io/v1/alertingrules Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind AlertingRule Table 2.2. HTTP responses HTTP code Reponse body 200 - OK AlertingRuleList schema 401 - Unauthorized Empty 2.2.2. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of AlertingRule Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertingRule Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK AlertingRuleList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertingRule Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body AlertingRule schema Table 2.11. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 202 - Accepted AlertingRule schema 401 - Unauthorized Empty 2.2.3. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the AlertingRule namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an AlertingRule Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertingRule Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.18. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertingRule Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertingRule Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body AlertingRule schema Table 2.24. 
HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 401 - Unauthorized Empty 2.2.4. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name}/status Table 2.25. Global path parameters Parameter Type Description name string name of the AlertingRule namespace string object name and auth scope, such as for teams and projects Table 2.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified AlertingRule Table 2.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.28. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AlertingRule Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.30. Body parameters Parameter Type Description body Patch schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AlertingRule Table 2.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.33. Body parameters Parameter Type Description body AlertingRule schema Table 2.34. HTTP responses HTTP code Response body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/monitoring_apis/alertingrule-monitoring-openshift-io-v1 |
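As a rough, hedged illustration of how the list parameters described in this reference combine in practice, the following curl sketch pages through AlertingRule objects two at a time. The API server address, bearer token, namespace, and the continue token placeholder are assumptions, not values taken from this reference.
# First page: return at most two AlertingRule objects from the assumed namespace
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/apis/monitoring.openshift.io/v1/namespaces/openshift-monitoring/alertingrules?limit=2"
# If metadata.continue is set in the response, pass it back to fetch the next chunk
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/apis/monitoring.openshift.io/v1/namespaces/openshift-monitoring/alertingrules?limit=2&continue=<token-from-previous-response>"
The same pattern applies to the write operations above; for example, appending ?dryRun=All to a POST exercises validation and admission without persisting the AlertingRule.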
Chapter 12. File System Check | Chapter 12. File System Check File systems may be checked for consistency, and optionally repaired, with file system-specific userspace tools. These tools are often referred to as fsck tools, where fsck is a shortened version of file system check . Note These file system checkers only guarantee metadata consistency across the file system; they have no awareness of the actual data contained within the file system and are not data recovery tools. File system inconsistencies can occur for various reasons, including but not limited to hardware errors, storage administration errors, and software bugs. Before modern metadata-journaling file systems became common, a file system check was required any time a system crashed or lost power. This was because a file system update could have been interrupted, leading to an inconsistent state. As a result, a file system check is traditionally run on each file system listed in /etc/fstab at boot-time. For journaling file systems, this is usually a very short operation, because the file system's metadata journaling ensures consistency even after a crash. However, there are times when a file system inconsistency or corruption may occur, even for journaling file systems. When this happens, the file system checker must be used to repair the file system. The following provides best practices and other useful information when performing this procedure. Important Red Hat does not recommend this unless the machine does not boot, the file system is extremely large, or the file system is on remote storage. It is possible to disable file system check at boot by setting the sixth field in /etc/fstab to 0 . 12.1. Best Practices for fsck Generally, running the file system check and repair tool can be expected to automatically repair at least some of the inconsistencies it finds. In some cases, severely damaged inodes or directories may be discarded if they cannot be repaired. Significant changes to the file system may occur. To ensure that unexpected or undesirable changes are not permanently made, perform the following precautionary steps: Dry run Most file system checkers have a mode of operation which checks but does not repair the file system. In this mode, the checker prints any errors that it finds and actions that it would have taken, without actually modifying the file system. Note Later phases of consistency checking may print extra errors as it discovers inconsistencies which would have been fixed in early phases if it were running in repair mode. Operate first on a file system image Most file systems support the creation of a metadata image , a sparse copy of the file system which contains only metadata. Because file system checkers operate only on metadata, such an image can be used to perform a dry run of an actual file system repair, to evaluate what changes would actually be made. If the changes are acceptable, the repair can then be performed on the file system itself. Note Severely damaged file systems may cause problems with metadata image creation. Save a file system image for support investigations A pre-repair file system metadata image can often be useful for support investigations if there is a possibility that the corruption was due to a software bug. Patterns of corruption present in the pre-repair image may aid in root-cause analysis. Operate only on unmounted file systems A file system repair must be run only on unmounted file systems. 
The tool must have sole access to the file system or further damage may result. Most file system tools enforce this requirement in repair mode, although some only support check-only mode on a mounted file system. If check-only mode is run on a mounted file system, it may find spurious errors that would not be found when run on an unmounted file system. Disk errors File system check tools cannot repair hardware problems. A file system must be fully readable and writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the file system must first be moved to a good disk, for example with the dd(8) utility. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-fsck |
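To make the dry run and metadata image precautions above concrete, the following is a minimal sketch for unmounted XFS and ext4 file systems; the device names and output paths are illustrative assumptions, not values from this chapter.
# Check-only (no-modify) pass on an unmounted XFS file system
xfs_repair -n /dev/vg_data/lv_data
# Forced, check-only pass on an unmounted ext4 file system
e2fsck -fn /dev/vg_data/lv_home
# Capture metadata-only images for a trial repair or a support investigation
xfs_metadump /dev/vg_data/lv_data /tmp/lv_data.metadump
e2image /dev/vg_data/lv_home /tmp/lv_home.e2i
If a trial repair against the image is acceptable, the corresponding repair command (xfs_repair or e2fsck without the no-modify flag) can then be run against the real, still unmounted, device.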
Release Notes | Release Notes Red Hat Single Sign-On 7.4 For Use with Red Hat Single Sign-On 7.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/release_notes/index |
Chapter 24. LDAP authentication | Chapter 24. LDAP authentication Administrators use the Lightweight Directory Access Protocol (LDAP) as a source for account authentication information for automation controller users. User authentication is provided, but not the synchronization of user permissions and credentials. Organization membership and team membership can be synchronized by the organization administrator. 24.1. Setting up LDAP authentication When configured, a user who logs in with an LDAP username and password automatically has an automation controller account created for them. They can be automatically placed into organizations as either regular users or organization administrators. Users created in the user interface (Local) take precedence over those logging into automation controller for their first time with an alternative authentication solution. You must delete the local user if you want to re-use with another authentication method, such as LDAP. Users created through an LDAP login cannot change their username, given name, surname, or set a local password for themselves. You can also configure this to restrict editing of other field names. Note If the LDAP server you want to connect to has a certificate that is self-signed or signed by a corporate internal certificate authority (CA), you must add the CA certificate to the system's trusted CAs. Otherwise, connection to the LDAP server results in an error that the certificate issuer is not recognized. For more information, see Importing a certificate authority in automation controller for LDAPS integration . If prompted, use your Red Hat customer credentials to login. Procedure Create a user in LDAP that has access to read the entire LDAP structure. Use the ldapsearch command to test if you can make successful queries to the LDAP server. You can install this tool from automation controller's system command line, and by using other Linux and OSX systems. Example In this example, CN=josie,CN=users,DC=website,DC=com is the distinguished name of the connecting user. Note The ldapsearch utility is not automatically pre-installed with automation controller. However, you can install it from the openldap-clients package. From the navigation panel, select Settings in the automation controller UI. Select LDAP settings in the list of Authentication options. You do not need multiple LDAP configurations per LDAP server, but you can configure many LDAP servers from this page, otherwise, leave the server at Default . The equivalent API endpoints show AUTH_LDAP_* repeated: AUTH_LDAP_1_* , AUTH_LDAP_2_* , AUTH_LDAP_5_* to denote server designations. To enter or change the LDAP server address, click Edit and enter in the LDAP Server URI field by using the same format as the one pre-populated in the text field. Note You can specify multiple LDAP servers by separating each with spaces or commas. Click the icon to comply with the correct syntax and rules. Enter the password to use for the binding user in the LDAP Bind Password text field. For more information about LDAP variables, see Ansible automation hub variables . Click to select a group type from the LDAP Group Type list. The LDAP group types that are supported by automation controller use the underlying django-auth-ldap library . To specify the parameters for the selected group type, see Step 15. The LDAP Start TLS is disabled by default. To enable TLS when the LDAP connection is not using SSL/TLS, set the toggle to On . 
Enter the distinguished name in the LDAP Bind DN text field to specify the user that automation controller uses to connect (Bind) to the LDAP server. If that name is stored in key sAMAccountName , the LDAP User DN Template is populated from (sAMAccountName=%(user)s) . Active Directory stores the username to sAMAccountName . For OpenLDAP, the key is uid and the line becomes (uid=%(user)s) . Enter the distinguished group name to enable users within that group to access automation controller in the LDAP Require Group field, using the same format as the one shown in the text field, CN=controller Users,OU=Users,DC=website,DC=com . Enter the distinguished group name to prevent users within that group from accessing automation controller in the LDAP Deny Group field, using the same format as the one shown in the text field. Enter where to search for users while authenticating in the LDAP User Search field by using the same format as the one shown in the text field. In this example, use: [ "OU=Users,DC=website,DC=com", "SCOPE_SUBTREE", "(cn=%(user)s)" ] The first line specifies where to search for users in the LDAP tree. In the earlier example, the users are searched recursively starting from DC=website,DC=com . The second line specifies the scope where the users should be searched: SCOPE_BASE : Use this value to indicate searching only the entry at the base DN, resulting in only that entry being returned. SCOPE_ONELEVEL : Use this value to indicate searching all entries one level under the base DN, but not including the base DN and not including any entries under that one level under the base DN. SCOPE_SUBTREE : Use this value to indicate searching of all entries at all levels under and including the specified base DN. The third line specifies the key name where the user name is stored. For many search queries, use the following correct syntax: [ [ "OU=Users,DC=northamerica,DC=acme,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)" ], [ "OU=Users,DC=apac,DC=corp,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)" ], [ "OU=Users,DC=emea,DC=corp,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)" ] ] In the LDAP Group Search text field, specify which groups to search and how to search them. In this example, use: [ "dc=example,dc=com", "SCOPE_SUBTREE", "(objectClass=group)" ] The first line specifies the BASE DN where the groups should be searched. The second line specifies the scope and is the same as that for the user directive. The third line specifies what the objectClass of a group object is in the LDAP that you are using. Enter the user attributes in the LDAP User Attribute Map the text field. In this example, use: { "first_name": "givenName", "last_name": "sn", "email": "mail" } The earlier example retrieves users by surname from the key sn . You can use the same LDAP query for the user to decide what keys they are stored under. Depending on the selected LDAP Group Type , different parameters are available in the LDAP Group Type Parameters field to account for this. LDAP_GROUP_TYPE_PARAMS is a dictionary that is converted by automation controller to kwargs and passed to the LDAP Group Type class selected. There are two common parameters used by any of the LDAP Group Type ; name_attr and member_attr . Where name_attr defaults to cn and member_attr defaults to member: {"name_attr": "cn", "member_attr": "member"} To find what parameters a specific LDAP Group Type expects, see the django_auth_ldap documentation around the classes init parameters. 
Enter the user profile flags in the LDAP User Flags by Group text field. The following example uses the syntax to set LDAP users as "Superusers" and "Auditors": { "is_superuser": "cn=superusers,ou=groups,dc=website,dc=com", "is_system_auditor": "cn=auditors,ou=groups,dc=website,dc=com" } For more information about completing the mapping fields, LDAP Organization Map and LDAP Team Map , see the LDAP Organization and team mapping section. Click Save . Note Automation controller does not actively synchronize users, but they are created during their initial login. To improve performance associated with LDAP authentication, see Preventing LDAP attributes from updating on each login . 24.1.1. LDAP organization and team mapping You can control which users are placed into which automation controller organizations based on LDAP attributes (mapping out between your organization administrators, users and LDAP groups). Keys are organization names. Organizations are created if not present. Values are dictionaries defining the options for each organization's membership. For each organization, you can specify what groups are automatically users of the organization and also what groups can administer the organization. admins : none , true , false , string or list/tuple of strings: If none , organization administrators are not updated based on LDAP values. If true , all users in LDAP are automatically added as administrators of the organization. If false , no LDAP users are automatically added as administrators of the organization. If a string or list of strings specifies the group DNs that are added to the organization if they match any of the specified groups. remove_admins: True/False. Defaults to False : When true , a user who is not a member of the given group is removed from the organization's administrative list. users : none , true , false , string or list/tuple of strings. The same rules apply as for administrators. remove_users : true or false . Defaults to false . The same rules apply as for administrators. Example When mapping between users and LDAP groups, keys are team names and are created if not present. Values are dictionaries of options for each team's membership, where each can contain the following parameters: organization : string . The name of the organization to which the team belongs. The team is created if the combination of organization and team name does not exist. The organization is first created if it does not exist. users : none , true , false , string , or list/tuple of strings: If none , team members are not updated. If true or false , all LDAP users are added or removed as team members. If a string or list of strings specifies the group DNs, the user is added as a team member if the user is a member of any of these groups. remove : true or false . Defaults to false . When true , a user who is not a member of the given group is removed from the team. Example 24.1.2. Enabling logging for LDAP To enable logging for LDAP, you must set the level to DEBUG in the Settings configuration window: Procedure From the navigation panel, select Settings . Select Logging settings from the list of System options. Click Edit . Set the Logging Aggregator Level Threshold field to DEBUG . Click Save . 24.1.3. Preventing LDAP attributes from updating on each login By default, when an LDAP user authenticates, all user-related attributes are updated in the database on each login. In some environments, you can skip this operation due to performance issues. 
To avoid it, you can disable the option AUTH_LDAP_ALWAYS_UPDATE_USER . Warning Set this option to false to not update the LDAP user's attributes. Attributes are only updated the first time the user is created. Procedure Create a custom file under /etc/tower/conf.d/custom-ldap.py with the following contents. If you have multiple nodes, execute it on all nodes: AUTH_LDAP_ALWAYS_UPDATE_USER = False Restart automation controller on all nodes: automation-controller-service restart With this option set to False , no changes to LDAP user's attributes are pushed to automation controller. Note that new users are created and their attributes are pushed to the database on their first login. By default, an LDAP user gets their attributes updated in the database upon each login. For a playbook that runs multiple times with an LDAP credential, those queries can be avoided. Verification Check the PostgreSQL for slow queries related to the LDAP authentication. Additional resources For more information, see AUTH_LDAP_ALWAYS_UPDATE_USER of the Django documentation. 24.1.4. Importing a certificate authority in automation controller for LDAPS integration You can authenticate to the automation controller server by using LDAP, but if you change to using LDAPS (LDAP over SSL/TLS) to authenticate, it fails with one of the following errors: 2020-04-28 17:25:36,184 WARNING django_auth_ldap Caught LDAPError while authenticating e079127: SERVER_DOWN({'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (unable to get issuer certificate)', 'desc': "Can't contact LDAP server"},) 2020-06-02 11:48:24,840 WARNING django_auth_ldap Caught LDAPError while authenticating reinernippes: SERVER_DOWN({'desc': "Can't contact LDAP server", 'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (certificate has expired)'},) Note By default, django_auth_ldap verifies SSL connections before starting an LDAPS transaction. When you receive a certificate verify failed error, this means that the django_auth_ldap could not verify the certificate. When the SSL/TLS connection cannot be verified, the connection attempt is halted. Procedure To import an LDAP CA, run the following commands: cp ldap_server-CA.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust Note Run these two commands on all automation controller nodes in a clustered setup. 24.1.5. Referrals Active Directory uses "referrals" in case the queried object is not available in its database. This does not work correctly with the django LDAP client and it helps to disable referrals. Disable LDAP referrals by adding the following lines to your /etc/tower/conf.d/custom.py file: AUTH_LDAP_GLOBAL_OPTIONS = { ldap.OPT_REFERRALS: False, } 24.1.6. Changing the default timeout for authentication You can change the default length of time, in seconds, that your supplied token is valid in the Settings screen of the automation controller UI. Procedure From the navigation panel, select Settings . Select Miscellaneous Authentication settings from the list of System options. Click Edit . Enter the timeout period in seconds in the Idle Time Force Log Out text field. Click Save . Note If you access automation controller and have trouble logging in, clear your web browser's cache. In situations such as this, it is common for the authentication token to be cached during the browser session. You must clear it to continue. | [
"ldapsearch -x -H ldap://win -D \"CN=josie,CN=Users,DC=website,DC=com\" -b \"dc=website,dc=com\" -w Josie4Cloud",
"[ \"OU=Users,DC=website,DC=com\", \"SCOPE_SUBTREE\", \"(cn=%(user)s)\" ]",
"[ [ \"OU=Users,DC=northamerica,DC=acme,DC=com\", \"SCOPE_SUBTREE\", \"(sAMAccountName=%(user)s)\" ], [ \"OU=Users,DC=apac,DC=corp,DC=com\", \"SCOPE_SUBTREE\", \"(sAMAccountName=%(user)s)\" ], [ \"OU=Users,DC=emea,DC=corp,DC=com\", \"SCOPE_SUBTREE\", \"(sAMAccountName=%(user)s)\" ] ]",
"[ \"dc=example,dc=com\", \"SCOPE_SUBTREE\", \"(objectClass=group)\" ]",
"{ \"first_name\": \"givenName\", \"last_name\": \"sn\", \"email\": \"mail\" }",
"{\"name_attr\": \"cn\", \"member_attr\": \"member\"}",
"{ \"is_superuser\": \"cn=superusers,ou=groups,dc=website,dc=com\", \"is_system_auditor\": \"cn=auditors,ou=groups,dc=website,dc=com\" }",
"{ \"LDAP Organization\": { \"admins\": \"cn=engineering_admins,ou=groups,dc=example,dc=com\", \"remove_admins\": false, \"users\": [ \"cn=engineering,ou=groups,dc=example,dc=com\", \"cn=sales,ou=groups,dc=example,dc=com\", \"cn=it,ou=groups,dc=example,dc=com\" ], \"remove_users\": false }, \"LDAP Organization 2\": { \"admins\": [ \"cn=Administrators,cn=Builtin,dc=example,dc=com\" ], \"remove_admins\": false, \"users\": true, \"remove_users\": false } }",
"{ \"LDAP Engineering\": { \"organization\": \"LDAP Organization\", \"users\": \"cn=engineering,ou=groups,dc=example,dc=com\", \"remove\": true }, \"LDAP IT\": { \"organization\": \"LDAP Organization\", \"users\": \"cn=it,ou=groups,dc=example,dc=com\", \"remove\": true }, \"LDAP Sales\": { \"organization\": \"LDAP Organization\", \"users\": \"cn=sales,ou=groups,dc=example,dc=com\", \"remove\": true } }",
"AUTH_LDAP_ALWAYS_UPDATE_USER = False",
"automation-controller-service restart",
"2020-04-28 17:25:36,184 WARNING django_auth_ldap Caught LDAPError while authenticating e079127: SERVER_DOWN({'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (unable to get issuer certificate)', 'desc': \"Can't contact LDAP server\"},)",
"2020-06-02 11:48:24,840 WARNING django_auth_ldap Caught LDAPError while authenticating reinernippes: SERVER_DOWN({'desc': \"Can't contact LDAP server\", 'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (certificate has expired)'},)",
"cp ldap_server-CA.crt /etc/pki/ca-trust/source/anchors/",
"update-ca-trust",
"AUTH_LDAP_GLOBAL_OPTIONS = { ldap.OPT_REFERRALS: False, }"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-ldap-authentication |
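Before committing the LDAP Group Search value in the UI, it can be useful to confirm the base DN and filter from the command line. The ldapsearch sketch below mirrors the example configuration in this chapter; the LDAP host and bind DN are assumptions to adapt to your directory.
# Verify that the group search base and filter return the expected groups and their members
ldapsearch -x -H ldap://ldap.example.com \
  -D "CN=josie,CN=Users,DC=website,DC=com" -W \
  -b "dc=example,dc=com" -s sub "(objectClass=group)" cn member
If this query returns the groups you expect, the same base DN, scope, and filter can be entered in the LDAP Group Search field.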
Chapter 3. Getting started | Chapter 3. Getting started This section provides a quick introduction to AMQ Interconnect by showing you how to install AMQ Interconnect, start the router with the default configuration settings, and distribute messages between two clients. 3.1. Installing AMQ Interconnect on Red Hat Enterprise Linux AMQ Interconnect is distributed as a set of RPM packages, which are available through your Red Hat subscription. Procedure Ensure your subscription has been activated and your system is registered. For more information about using the Customer Portal to activate your Red Hat subscription and register your system for packages, see Appendix A, Using your subscription . Subscribe to the required repositories: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Use the yum or dnf command to install the qpid-dispatch-router , qpid-dispatch-tools , and qpid-dispatch-console packages and their dependencies: Use the which command to verify that the qdrouterd executable is present. The qdrouterd executable should be located at /usr/sbin/qdrouterd . 3.2. Exploring the default router configuration file The router's configuration file ( qdrouterd.conf ) controls the way in which the router functions. The default configuration file contains the minimum number of settings required for the router to run. As you become more familiar with the router, you can add to or change these settings, or create your own configuration files. By default, the router configuration file defines the following settings for the router: Operating mode How it listens for incoming connections Routing patterns for the message routing mechanism Procedure Open the following file: /etc/qpid-dispatch/qdrouterd.conf . When AMQ Interconnect is installed, qdrouterd.conf is installed in this directory. When the router is started, it runs with the settings defined in this file. Review the default settings in qdrouterd.conf . Default configuration file 1 By default, the router operates in standalone mode. This means that it can only communicate with endpoints that are directly connected to it. It cannot connect to other routers, or participate in a router network. 2 The unique identifier of the router. This ID is used as the container-id (container name) at the AMQP protocol level. If it is not specified, the router shall generate a random identifier at startup. 3 The listener entity handles incoming connections from client endpoints. By default, the router listens on all network interfaces on the default AMQP port (5672). 4 By default, the router is configured to use the message routing mechanism. Each address entity defines how messages that are received with a particular address prefix should be distributed. For example, all messages with addresses that start with closest will be distributed using the closest distribution pattern. Note If a client requests a message with an address that is not defined in the router's configuration file, the balanced distribution pattern will be used automatically. Additional resources For more information about the router configuration file (including available entities and attributes), see the qdrouterd man page . 3.3. Starting the router After installing AMQ Interconnect, you start the router by using the qdrouterd command. Procedure Start the router: USD qdrouterd The router starts, using the default configuration file stored at /etc/qpid-dispatch/qdrouterd.conf . Review the qdrouterd command output to verify the router status. 
This example shows that the router was correctly installed, is running, and is ready to route traffic between clients: Additional resources The qdrouterd man page . 3.4. Sending test messages After starting the router, send some test messages to see how the router can connect two endpoints by distributing messages between them. This procedure demonstrates a simple configuration consisting of a single router with two clients connected to it: a sender and a receiver. The receiver wants to receive messages on a specific address, and the sender sends messages to that address. A broker is not used in this procedure, so there is no "store and forward" mechanism in the middle. Instead, the messages flow from the sender, through the router, to the receiver only if the receiver is online, and the sender can confirm that the messages have arrived at their destination. Prerequisites AMQ Python must be installed. For more information, see Using the AMQ Python Client . Procedure Navigate to the AMQ Python examples directory. USD cd <install-dir> /examples/python/ <install-dir> The directory where you installed AMQ Python. Start the simple_recv.py receiver client. USD python simple_recv.py -a 127.0.0.1:5672/examples -m 5 This command starts the receiver and listens on the examples address ( 127.0.0.1:5672/examples ). The receiver is also set to receive a maximum of five messages. Note In practice, the order in which you start senders and receivers does not matter. In both cases, messages will be sent as soon as the receiver comes online. In a new terminal window, navigate to the Python examples directory and run the simple_send.py example: USD cd <install-dir> /examples/python/ USD python simple_send.py -a 127.0.0.1:5672/examples -m 5 This command sends five auto-generated messages to the examples address ( 127.0.0.1:5672/examples ) and then confirms that they were delivered and acknowledged by the receiver: all messages confirmed Verify that the receiver client received the messages. The receiver client should display the contents of the five messages: {u'sequence': 1L} {u'sequence': 2L} {u'sequence': 3L} {u'sequence': 4L} {u'sequence': 5L} 3.5. steps After using AMQ Interconnect to distribute messages between two clients, you can use the following sections to learn more about AMQ Interconnect configuration, deployment, and management. Change the router's configuration AMQ Interconnect ships with default settings that are suitable for many basic use cases. You can further experiment with the standalone router that you used in the Getting started example by changing the router's essential properties, network connections, security settings, logging, and routing mechanisms. Install and configure AMQ Interconnect AMQ Interconnect is typically deployed in router networks. You can design a router network of any arbitrary topology to interconnect the endpoints in your messaging network. Monitor and manage AMQ Interconnect You can use the web console and command-line management tools to monitor the status and performance of the routers in your router network. | [
"sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-7-server-rpms --enable=amq-clients-2-for-rhel-7-server-rpms",
"sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-8-x86_64-rpms --enable=amq-clients-2-for-rhel-8-x86_64-rpms",
"sudo yum install qpid-dispatch-router qpid-dispatch-tools qpid-dispatch-console",
"which qdrouterd /usr/sbin/qdrouterd",
"router { mode: standalone 1 id: Router.A 2 } listener { 3 host: 0.0.0.0 port: amqp authenticatePeer: no } address { 4 prefix: closest distribution: closest } address { prefix: multicast distribution: multicast } address { prefix: unicast distribution: closest } address { prefix: exclusive distribution: closest } address { prefix: broadcast distribution: multicast }",
"qdrouterd",
"qdrouterd Fri May 20 09:38:03 2017 SERVER (info) Container Name: Router.A Fri May 20 09:38:03 2017 ROUTER (info) Router started in Standalone mode Fri May 20 09:38:03 2017 ROUTER (info) Router Core thread running. 0/Router.A Fri May 20 09:38:03 2017 ROUTER (info) In-process subscription M/USDmanagement Fri May 20 09:38:03 2017 AGENT (info) Activating management agent on USD_management_internal Fri May 20 09:38:03 2017 ROUTER (info) In-process subscription L/USDmanagement Fri May 20 09:38:03 2017 ROUTER (info) In-process subscription L/USD_management_internal Fri May 20 09:38:03 2017 DISPLAYNAME (info) Activating DisplayNameService on USDdisplayname Fri May 20 09:38:03 2017 ROUTER (info) In-process subscription L/USDdisplayname Fri May 20 09:38:03 2017 CONN_MGR (info) Configured Listener: 0.0.0.0:amqp proto=any role=normal Fri May 20 09:38:03 2017 POLICY (info) Policy configured maximumConnections: 0, policyFolder: '', access rules enabled: 'false' Fri May 20 09:38:03 2017 POLICY (info) Policy fallback defaultApplication is disabled Fri May 20 09:38:03 2017 SERVER (info) Operational, 4 Threads Running",
"cd <install-dir> /examples/python/",
"python simple_recv.py -a 127.0.0.1:5672/examples -m 5",
"cd <install-dir> /examples/python/ python simple_send.py -a 127.0.0.1:5672/examples -m 5",
"all messages confirmed",
"{u'sequence': 1L} {u'sequence': 2L} {u'sequence': 3L} {u'sequence': 4L} {u'sequence': 5L}"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/getting-started-router-rhel |
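As an optional follow-up to the test in this chapter, you can inspect the router itself to confirm that the examples address and the two client connections were seen. The qdstat tool is installed with the qpid-dispatch-tools package; the bus address below is the default local listener, so adjust it if you changed the listener configuration.
# Show the addresses known to the router, including the examples address used by the clients
qdstat -b 127.0.0.1:5672 -a
# Show the client connections currently open to the router
qdstat -b 127.0.0.1:5672 -c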
C.6. USB 3 / xHCI Support | C.6. USB 3 / xHCI Support USB 3 (xHCI) USB host adapter emulation is supported in Red Hat Enterprise Linux 7.2 and above. All USB speeds are available, meaning any generation of USB device can be plugged into a xHCI bus. Additionally, no companion controllers (for USB 1 devices) are required. Note, however, that USB 3 bulk streams are not supported. Advantages of xHCI: Virtualization-compatible hardware design, meaning xHCI emulation requires less CPU resources than versions due to reduced polling overhead. USB passthrough of USB 3 devices is available. Limitations of xHCI: Not supported for Red Hat Enterprise Linux 5 guests. See Figure 16.19, "Domain XML example for USB3/xHCI devices" for a domain XML device example for xHCI devices. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_guest_virtual_machine_compatibility-usb3.0-support |
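Because the referenced Figure 16.19 is not reproduced here, the following is a minimal sketch of what an xHCI controller definition in a guest's domain XML can look like. The index, port count, and the extra tablet device are illustrative assumptions rather than required values.
<devices>
  <!-- Emulated USB 3 (xHCI) controller; any generation of USB device can attach to this bus -->
  <controller type='usb' index='0' model='nec-xhci' ports='8'/>
  <!-- Example USB device that enumerates on the xHCI bus -->
  <input type='tablet' bus='usb'/>
</devices>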
Chapter 1. Adding Data Grid to your Maven repository | Chapter 1. Adding Data Grid to your Maven repository Data Grid Java distributions are available from Maven. You can download the Data Grid Maven repository from the customer portal or pull Data Grid dependencies from the public Red Hat Enterprise Maven repository. 1.1. Downloading the Maven repository Download and install the Data Grid Maven repository to a local file system, Apache HTTP server, or Maven repository manager if you do not want to use the public Red Hat Enterprise Maven repository. Procedure Log in to the Red Hat customer portal. Navigate to the Software Downloads for Data Grid . Download the Red Hat Data Grid 8.4 Maven Repository. Extract the archived Maven repository to your local file system. Open the README.md file and follow the appropriate installation instructions. 1.2. Adding Red Hat Maven repositories Include the Red Hat GA repository in your Maven build environment to get Data Grid artifacts and dependencies. Procedure Add the Red Hat GA repository to your Maven settings file, typically ~/.m2/settings.xml , or directly in the pom.xml file of your project. <repositories> <repository> <id>redhat-ga-repository</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </pluginRepository> </pluginRepositories> Reference Red Hat Enterprise Maven Repository 1.3. Configuring your project POM Configure Project Object Model (POM) files in your project to use Data Grid dependencies for embedded caches, Hot Rod clients, and other capabilities. Procedure Open your project pom.xml for editing. Define the version.infinispan property with the correct Data Grid version. Include the infinispan-bom in a dependencyManagement section. The Bill Of Materials (BOM) controls dependency versions, which avoids version conflicts and means you do not need to set the version for each Data Grid artifact you add as a dependency to your project. Save and close pom.xml . The following example shows the Data Grid version and BOM: <properties> <version.infinispan>14.0.21.Final-redhat-00001</version.infinispan> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-bom</artifactId> <version>USD{version.infinispan}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Steps Add Data Grid artifacts as dependencies to your pom.xml as required. | [
"<repositories> <repository> <id>redhat-ga-repository</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </pluginRepository> </pluginRepositories>",
"<properties> <version.infinispan>14.0.21.Final-redhat-00001</version.infinispan> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-bom</artifactId> <version>USD{version.infinispan}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/configuring-maven-repository |
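As a follow-on to the BOM configuration above, the sketch below shows how a dependency can then be declared without an explicit version. The infinispan-core artifact is used purely as an example; substitute the Data Grid artifacts your application actually needs.
<dependencies>
  <!-- The version is inherited from infinispan-bom declared in dependencyManagement -->
  <dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
  </dependency>
</dependencies>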
Chapter 15. Uninstalling a cluster on GCP | Chapter 15. Uninstalling a cluster on GCP You can remove a cluster that you deployed to Google Cloud Platform (GCP). 15.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 15.2. Deleting GCP resources with the Cloud Credential Operator utility To clean up resources after uninstalling an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in manual mode with GCP Workload Identity, you can use the CCO utility ( ccoctl ) to remove the GCP resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Install an OpenShift Container Platform cluster with the CCO in manual mode with GCP Workload Identity. Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract --credentials-requests \ --cloud=gcp \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 USDRELEASE_IMAGE 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Delete the GCP resources that ccoctl created: USD ccoctl gcp delete \ --name=<name> \ 1 --project=<gcp_project_id> \ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <gcp_project_id> is the GCP project ID in which to delete cloud resources. Verification To verify that the resources are deleted, query GCP. For more information, refer to GCP documentation. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 USDRELEASE_IMAGE",
"ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_gcp/uninstalling-cluster-gcp |
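As one possible way to carry out the GCP verification step above, the gcloud sketch below looks for leftover IAM service accounts that still carry the installation name. It assumes the same <name> and <gcp_project_id> values used earlier and is illustrative rather than exhaustive; other resource types may also need to be checked.
# List service accounts whose display name still references the installation name
gcloud iam service-accounts list --project=<gcp_project_id> --filter="displayName:<name>"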
Chapter 25. Schema for Red Hat Quay configuration | Chapter 25. Schema for Red Hat Quay configuration Most Red Hat Quay configuration information is stored in the config.yaml file. All configuration options are described in the Red Hat Quay Configuration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/quay-schema |
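Purely as a hedged illustration of the kind of settings that live in config.yaml (the authoritative list is in the configuration guide referenced above), a fragment might look like the following; all values are placeholders.
# Assumed example values only
SERVER_HOSTNAME: quay.example.com
PREFERRED_URL_SCHEME: https
DB_URI: postgresql://quayuser:<password>@db.example.com:5432/quay
BUILDLOGS_REDIS:
  host: redis.example.com
  port: 6379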
32.2.2. Adding a User | 32.2.2. Adding a User To add a user to the system: Issue the useradd command to create a locked user account: Unlock the account by issuing the passwd command to assign a password and set password aging guidelines: Command line options for useradd are detailed in Table 32.1, " useradd Command Line Options" . Table 32.1. useradd Command Line Options Option Description -c ' <comment> ' <comment> can be replaced with any string. This option is generally used to specify the full name of a user. -d <home-dir> Home directory to be used instead of the default /home/ <username> / -e <date> Date for the account to be disabled, in the format YYYY-MM-DD -f <days> Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires. -g <group-name> Group name or group number for the user's default group. The group must exist prior to being specified here. -G <group-list> List of additional (other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here. -m Create the home directory if it does not exist. -M Do not create the home directory. -n Do not create a user private group for the user. -r Create a system account with a UID less than 500 and without a home directory. -p <password> The password encrypted with crypt . -s User's login shell, which defaults to /bin/bash . -u <uid> User ID for the user, which must be unique and greater than 499.
"useradd <username>",
"passwd <username>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s2-users-add |
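Combining several of the options from Table 32.1, a typical invocation might look like the following. The account details are examples only, and chage is shown as one common way to apply password aging after the account is unlocked.
# Create a locked account with a comment, supplementary groups, and an explicit shell
useradd -c 'Juan Ramirez' -G wheel,developers -s /bin/bash juan
# Assign a password, then require a password change every 90 days
passwd juan
chage -M 90 juan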
Migrating Fuse 7 Applications to Red Hat build of Apache Camel for Quarkus | Migrating Fuse 7 Applications to Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel 4.8 Migrating Fuse 7 Applications to Red Hat build of Apache Camel for Quarkus | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/index |
Chapter 4. Deploying the configured back ends | Chapter 4. Deploying the configured back ends To deploy the configured back ends, complete the following steps: Procedure Log in as the stack user. Run the following command to deploy the custom back end configuration: Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the overcloud environment in the Director Installation and Usage guide. | [
"openstack overcloud deploy --templates -e /home/stack/templates/custom-env.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/proc_deploying-configured-back-ends_custom-cinder-back-end |
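For example, if the overcloud was originally deployed with an additional environment file, the redeployment command might look like the following; the node-info.yaml file name is a hypothetical stand-in for whatever extra environment files your deployment already uses:
openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/custom-env.yaml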
7.21. certmonger | 7.21. certmonger 7.21.1. RHBA-2015:1379 - certmonger bug fix and enhancement update Updated certmonger packages that fix two bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The certmonger service monitors certificates, warns of their impending expiration, and optionally attempts to renew certificates by enrolling the system with a certificate authority (CA). Bug Fixes BZ# 1163023 Prior to this update, after the user upgraded from Red Hat Enterprise Linux 6.5 to Red Hat Enterprise Linux 6.6 and rebooted the system, certmonger in some cases erroneously exited shortly after starting or performed a series of unnecessary checks for new certificates. A patch has been applied to fix this bug, and these problems no longer occur in the described situation. BZ# 1178190 Previously, the "getcert list" command did not display the "pre-save command" and "post-save command" values. As a consequence, running "getcert list" could return incomplete results. With this update, the problem has been fixed, and running "getcert list" displays the "pre-save command" and "post-save command" values as expected. Enhancements BZ# 1161768 The certmonger service now supports the Simple Certificate Enrollment Protocol (SCEP). For obtaining certificates from servers, the user can now offer enrollment over SCEP. BZ# 1169806 Requesting a certificate using the getcert utility during an IdM client kickstart enrollment no longer requires certmonger to be running. Previously, an attempt to do this failed because certmonger was not running. With this update, getcert can successfully request a certificate in the described situation, on the condition that the D-Bus daemon is not running. Note that certmonger requires a system reboot to start monitoring the certificate obtained in this way. BZ# 1222595 Previously, after the user ran the "getcert list" command, the output included the PIN value if it was set for the certificate. Consequently, the user could unintentionally expose the PIN, for example by publicly sharing the output of the command. With this update, the "getcert list" output only contains a note that a PIN is set for the certificate. As a result, the PIN value itself is no longer displayed in the "getcert list" output. Users of certmonger are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-certmonger |
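As a rough sketch of the workflow these notes refer to, a certificate can be requested and tracked with getcert; the CA name and file paths below are assumptions, not values from the erratum:
# ask certmonger to obtain and track a certificate from the IPA CA
getcert request -c IPA -f /etc/pki/tls/certs/example.crt -k /etc/pki/tls/private/example.key
# list tracked requests; the output now includes the pre-save and post-save command values
getcert list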
Chapter 6. Using the management API | Chapter 6. Using the management API AMQ Broker has an extensive management API, which you can use to modify a broker's configuration, create new resources (for example, addresses and queues), inspect these resources (for example, how many messages are currently held in a queue), and interact with them (for example, to remove messages from a queue). In addition, clients can use the management API to manage the broker and subscribe to management notifications. 6.1. Methods for managing AMQ Broker using the management API There are two ways to use the management API to manage the broker: Using JMX - JMX is the standard way to manage Java applications Using the JMS API - management operations are sent to the broker using JMS messages and the Red Hat build of Apache Qpid JMS client Although there are two different ways to manage the broker, each API supports the same functionality. If it is possible to manage a resource using JMX it is also possible to achieve the same result by using JMS messages and the Red Hat build of Apache Qpid JMS client. This choice depends on your particular requirements, application settings, and environment. Regardless of the way you invoke management operations, the management API is the same. For each managed resource, there exists a Java interface describing what can be invoked for this type of resource. The broker exposes its managed resources in the org.apache.activemq.artemis.api.core.management package. The way to invoke management operations depends on whether JMX messages or JMS messages and the Red Hat build of Apache Qpid JMS client are used. Note Some management operations require a filter parameter to choose which messages are affected by the operation. Passing null or an empty string means that the management operation will be performed on all messages . 6.2. Managing AMQ Broker using JMX You can use Java Management Extensions (JMX) to manage a broker. The management API is exposed by the broker using MBeans interfaces. The broker registers its resources with the domain org.apache.activemq . For example, the ObjectName to manage a queue named exampleQueue is: org.apache.activemq.artemis:broker="__BROKER_NAME__",component=addresses,address="exampleQueue",subcomponent=queues,routingtype="anycast",queue="exampleQueue" The MBean is: org.apache.activemq.artemis.api.management.QueueControl The MBean's ObjectName is built using the helper class org.apache.activemq.artemis.api.core.management.ObjectNameBuilder . You can also use jconsole to find the ObjectName of the MBeans you want to manage. Managing the broker using JMX is identical to management of any Java applications using JMX. It can be done by reflection or by creating proxies of the MBeans. 6.2.1. Configuring JMX management By default, JMX is enabled to manage the broker. You can enable or disable JMX management by setting the jmx-management-enabled property in the broker.xml configuration file. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Set <jmx-management-enabled> . <jmx-management-enabled>true</jmx-management-enabled> If JMX is enabled, the broker can be managed locally using jconsole . Note Remote connections to JMX are not enabled by default for security reasons. If you want to manage multiple brokers from the same MBeanServer , configure the JMX domain for each of the brokers. By default, the broker uses the JMX domain org.apache.activemq.artemis . 
<jmx-domain>my.org.apache.activemq</jmx-domain> Note If you are using AMQ Broker on a Windows system, system properties must be set in artemis , or artemis.cmd . A shell script is located under <install_dir> /bin . Additional resources For more information on configuring the broker for remote management, see Oracle's Java Management Guide . 6.2.2. Configuring JMX management access By default, remote JMX access to a broker is disabled for security reasons. However, AMQ Broker has a JMX agent that allows remote access to JMX MBeans. You enable JMX access by configuring a connector element in the broker management.xml configuration file. Note While it is also possible to enable JMX access using the `com.sun.management.jmxremote ` JVM system property, that method is not supported and is not secure. Modifying that JVM system property can bypass RBAC on the broker. To minimize security risks, consider limited access to localhost. Important Exposing the JMX agent of a broker for remote management has security implications. To secure your configuration as described in this procedure: Use SSL for all connections. Explicitly define the connector host, that is, the host and port to expose the agent on. Explicitly define the port that the RMI (Remote Method Invocation) registry binds to. Prerequisites A working broker instance The Java jconsole utility Procedure Open the <broker-instance-dir> /etc/management.xml configuration file. Define a connector for the JMX agent. The connector-port setting establishes an RMI registry that clients such as jconsole query for the JMX connector server. For example, to allow remote access on port 1099: <connector connector-port="1099"/> Verify the connection to the JMX agent using jconsole : Define additional properties on the connector, as described below. connector-host The broker server host to expose the agent on. To prevent remote access, set connector-host to 127.0.0.1 (localhost). rmi-registry-port The port that the JMX RMI connector server binds to. If not set, the port is always random. Set this property to avoid problems with remote JMX connections tunnelled through a firewall. jmx-realm JMX realm to use for authentication. The default value is activemq to match the JAAS configuration. object-name Object name to expose the remote connector on. The default value is connector:name=rmi . secured Specify whether the connector is secured using SSL. The default value is false . Set the value to true to ensure secure communication. key-store-path Location of the keystore. Required if you have set secured="true" . key-store-password Keystore password. Required if you have set secured="true" . The password can be encrypted. key-store-provider Keystore provider. Required if you have set secured="true" . The default value is JKS . trust-store-path Location of the truststore. Required if you have set secured="true" . trust-store-password Truststore password. Required if you have set secured="true" . The password can be encrypted. trust-store-provider Truststore provider. Required if you have set secured="true" . The default value is JKS password-codec The fully qualified class name of the password codec to use. See the password masking documentation, linked below, for more details on how this works. Note The RMI registry picks an IP address to bind to. 
If you have multiple IP addresses/NICs present on the system, then you can choose the IP address to use by adding the following to the artemis.profile file: -Djava.rmi.server.hostname=localhost Set an appropriate value for the endpoint serialization using jdk.serialFilter as described in the Java Platform documentation . Additional resources For more information about encrypted passwords in configuration files, see Encrypting Passwords in Configuration Files . 6.2.3. MBeanServer configuration When the broker runs in standalone mode, it uses the Java Virtual Machine's Platform MBeanServer to register its MBeans. By default, Jolokia is also deployed to allow access to the MBean server using REST. 6.2.4. How JMX is exposed with Jolokia By default, AMQ Broker ships with the Jolokia HTTP agent deployed as a web application. Jolokia is a remote JMX over HTTP bridge that exposes MBeans. Note To use Jolokia, the user must belong to the role defined by the hawtio.role system property in the <broker_instance_dir> /etc/artemis.profile configuration file. By default, this role is amq . Example 6.1. Using Jolokia to query the broker's version This example uses a Jolokia REST URL to find the version of a broker. The Origin flag should specify the domain name or DNS host name for the broker server. In addition, the value you specify for Origin must correspond to an entry for <allow-origin> in your Jolokia Cross-Origin Resource Sharing (CORS) specification. USD curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\"/Version -H "Origin: mydomain.com" {"request":{"mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","attribute":"Version","type":"read"},"value":"2.4.0.amq-710002-redhat-1","timestamp":1527105236,"status":200} Additional resources For more information on using a JMX-HTTP bridge, see the Jolokia documentation . For more information on assigning a user to a role, see Adding Users . For more information on specifying Jolokia Cross-Origin Resource Sharing (CORS), see section 4.1.5 of link: Security . 6.2.5. Subscribing to JMX management notifications If JMX is enabled in your environment, you can subscribe to management notifications. Procedure Subscribe to ObjectName org.apache.activemq.artemis:broker=" <broker-name> " . Additional resources For more information about management notifications, see Section 6.5, "Management notifications" . 6.3. Managing AMQ Broker using the JMS API The Java Message Service (JMS) API allows you to create, send, receive, and read messages. You can use JMS and the Red Hat build of Apache Qpid JMS client to manage brokers. 6.3.1. Configuring broker management using JMS messages and the Red Hat build of Apache Qpid JMS Client To use JMS to manage a broker, you must first configure the broker's management address with the manage permission. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the <management-address> element, and specify a management address. By default, the management address is activemq.management . You only need to specify a different address if you do not want to use the default. <management-address>my.management.address</management-address> Provide the management address with the manage user permission type. This permission type enables the management address to receive and handle management messages. <security-setting-match="activemq.management"> <permission-type="manage" roles="admin"/> </security-setting> 6.3.2. 
Managing brokers using the JMS API and Red Hat build of Apache Qpid JMS Client To invoke management operations using JMS messages, the Red Hat build of Apache Qpid JMS client must instantiate the special management queue. Procedure Create a QueueRequestor to send messages to the management address and receive replies. Create a Message . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to fill the message with the management properties. Send the message using the QueueRequestor . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to retrieve the operation result from the management reply. Example 6.2. Viewing the number of messages in a queue This example shows how to use the JMS API to view the number of messages in the JMS queue exampleQueue : Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management"); QueueSession session = ... QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, "queue.exampleQueue", "messageCount"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println("There are " + count + " messages in exampleQueue"); 6.4. Management operations Whether you are using JMX or JMS messages to manage AMQ Broker, you can use the same API management operations. Using the management API, you can manage brokers, addresses, and queues. 6.4.1. Broker management operations You can use the management API to manage your brokers. Listing, creating, deploying, and destroying queues A list of deployed queues can be retrieved using the getQueueNames() method. Queues can be created or destroyed using the management operations createQueue() , deployQueue() , or destroyQueue() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). createQueue will fail if the queue already exists while deployQueue will do nothing. Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. Listing and closing remote connections Retrieve a client's remote addresses by using listRemoteAddresses() . It is also possible to close the connections associated with a remote address using the closeConnectionsForAddress() method. Alternatively, list connection IDs using listConnectionIDs() and list all the sessions for a given connection ID using listSessions() . Managing transactions In case of a broker crash, when the broker restarts, some transactions might require manual intervention. Use the following methods to help resolve issues you encounter. List the transactions which are in the prepared states (the transactions are represented as opaque Base64 Strings) using the listPreparedTransactions() method. Commit or rollback a given prepared transaction using commitPreparedTransaction() or rollbackPreparedTransaction() to resolve heuristic transactions. List heuristically completed transactions using the listHeuristicCommittedTransactions() and listHeuristicRolledBackTransactions() methods. Enabling and resetting message counters Enable and disable message counters using the enableMessageCounters() or disableMessageCounters() method.
Reset message counters by using the resetAllMessageCounters() and resetAllMessageCounterHistories() methods. Retrieving broker configuration and attributes The ActiveMQServerControl exposes the broker's configuration through all its attributes (for example, getVersion() method to retrieve the broker's version, and so on). Listing, creating, and destroying Core Bridge and diverts List deployed Core Bridge and diverts using the getBridgeNames() and getDivertNames() methods respectively. Create or destroy bridges and diverts using createBridge() and destroyBridge() or createDivert() and destroyDivert() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Stopping the broker and forcing failover to occur with any currently attached clients Use the forceFailover() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Note Because this method actually stops the broker, you will likely receive an error. The exact error depends on the management service you used to call the method. 6.4.2. Address management operations You can use the management API to manage addresses. Manage addresses using the AddressControl class with ObjectName org.apache.activemq.artemis:broker=" <broker-name> ", component=addresses,address=" <address-name> " or the resource name address. <address-name> . Modify roles and permissions for an address using the addRole() or removeRole() methods. You can list all the roles associated with the queue with the getRoles() method. 6.4.3. Queue management operations You can use the management API to manage queues. The core management API deals with queues. The QueueControl class defines the queue management operations (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=addresses,address=" <bound-address> ",subcomponent=queues,routing-type=" <routing-type> ",queue=" <queue-name> " or the resource name queue. <queue-name> ). Most of the management operations on queues take either a single message ID (for example, to remove a single message) or a filter (for example, to expire all messages with a given property). Expiring, sending to a dead letter address, and moving messages Expire messages from a queue using the expireMessages() method. If an expiry address is defined, messages are sent to this address, otherwise they are discarded. You can define the expiry address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Send messages to a dead letter address using the sendMessagesToDeadLetterAddress() method. This method returns the number of messages sent to the dead letter address. If a dead letter address is defined, messages are sent to this address, otherwise they are removed from the queue and discarded. You can define the dead letter address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Move messages from one queue to another using the moveMessages() method. Listing and removing messages List messages from a queue using the listMessages() method.
It will return an array of Map , one Map for each message. Remove messages from a queue using the removeMessages() method, which returns a boolean for the single message ID variant or the number of removed messages for the filter variant. This method takes a filter argument to remove only filtered messages. Setting the filter to an empty string will in effect remove all messages. Counting messages The number of messages in a queue is returned by the getMessageCount() method. Alternatively, the countMessages() method will return the number of messages in the queue which match a given filter. Changing message priority The message priority can be changed by using the changeMessagesPriority() method which returns a boolean for the single message ID variant or the number of updated messages for the filter variant. Message counters Message counters can be listed for a queue with the listMessageCounter() and listMessageCounterHistory() methods (see Section 6.6, "Using message counters" ). The message counters can also be reset for a single queue using the resetMessageCounter() method. Retrieving the queue attributes The QueueControl exposes queue settings through its attributes (for example, getFilter() to retrieve the queue's filter if it was created with one, isDurable() to know whether the queue is durable, and so on). Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. 6.4.4. Remote resource management operations You can use the management API to start and stop a broker's remote resources (acceptors, diverts, bridges, and so on) so that the broker can be taken offline for a given period of time without stopping completely. Acceptors Start or stop an acceptor using the start() or stop() method on the AcceptorControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=acceptors,name=" <acceptor-name> " or the resource name acceptor. <acceptor-name> ). Acceptor parameters can be retrieved using the AcceptorControl attributes. See Network Connections: Acceptors and Connectors for more information about Acceptors. Diverts Start or stop a divert using the start() or stop() method on the DivertControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=diverts,name=" <divert-name> " or the resource name divert. <divert-name> ). Divert parameters can be retrieved using the DivertControl attributes. Bridges Start or stop a bridge using the start() or stop() method on the BridgeControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=bridge,name=" <bridge-name> " or the resource name bridge. <bridge-name> ). Bridge parameters can be retrieved using the BridgeControl attributes. Broadcast groups Start or stop a broadcast group using the start() or stop() method on the BroadcastGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=broadcast-group,name=" <broadcast-group-name> " or the resource name broadcastgroup. <broadcast-group-name> ). Broadcast group parameters can be retrieved using the BroadcastGroupControl attributes. See Broker discovery methods for more information.
Discovery groups Start or stop a discovery group using the start() or stop() method on the DiscoveryGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=discovery-group,name=" <discovery-group-name> " or the resource name discovery. <discovery-group-name> ). Discovery groups parameters can be retrieved using the DiscoveryGroupControl attributes. See Broker discovery methods for more information. Cluster connections Start or stop a cluster connection using the start() or stop() method on the ClusterConnectionControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=cluster-connection,name=" <cluster-connection-name> " or the resource name clusterconnection. <cluster-connection-name> ). Cluster connection parameters can be retrieved using the ClusterConnectionControl attributes. See Creating a broker cluster for more information. 6.5. Management notifications Below is a list of all the different kinds of notifications as well as which headers are on the messages. Every notification has a _AMQ_NotifType (value noted in parentheses) and _AMQ_NotifTimestamp header. The time stamp is the unformatted result of a call to java.lang.System.currentTimeMillis() . Table 6.1. Management notifications Notification type Headers BINDING_ADDED (0) _AMQ_Binding_Type _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString BINDING_REMOVED (1) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString CONSUMER_CREATED (2) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString CONSUMER_CLOSED (3) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString SECURITY_AUTHENTICATION_VIOLATION (6) _AMQ_User SECURITY_PERMISSION_VIOLATION (7) _AMQ_Address _AMQ_CheckType _AMQ_User DISCOVERY_GROUP_STARTED (8) name DISCOVERY_GROUP_STOPPED (9) name BROADCAST_GROUP_STARTED (10) name BROADCAST_GROUP_STOPPED (11) name BRIDGE_STARTED (12) name BRIDGE_STOPPED (13) name CLUSTER_CONNECTION_STARTED (14) name CLUSTER_CONNECTION_STOPPED (15) name ACCEPTOR_STARTED (16) factory id ACCEPTOR_STOPPED (17) factory id PROPOSAL (18) _JBM_ProposalGroupId _JBM_ProposalValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance PROPOSAL_RESPONSE (19) _JBM_ProposalGroupId _JBM_ProposalValue _JBM_ProposalAltValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance CONSUMER_SLOW (21) _AMQ_Address _AMQ_ConsumerCount _AMQ_RemoteAddress _AMQ_ConnectionName _AMQ_ConsumerName _AMQ_SessionName 6.6. Using message counters You use message counters to obtain information about queues over time. This helps you to identify trends that would otherwise be difficult to see. For example, you could use message counters to determine how a particular queue is being used over time. You could also attempt to obtain this information by using the management API to query the number of messages in the queue at regular intervals, but this would not show how the queue is actually being used. The number of messages in a queue can remain constant because no clients are sending or receiving messages on it, or because the number of messages sent to the queue is equal to the number of messages consumed from it. In both of these cases, the number of messages in the queue remains the same even though it is being used in very different ways. 6.6.1. 
Types of message counters Message counters provide additional information about queues on a broker. count The total number of messages added to the queue since the broker was started. countDelta The number of messages added to the queue since the last message counter update. lastAckTimestamp The time stamp of the last time a message from the queue was acknowledged. lastAddTimestamp The time stamp of the last time a message was added to the queue. messageCount The current number of messages in the queue. messageCountDelta The overall number of messages added/removed from the queue since the last message counter update. For example, if messageCountDelta is -10 , then 10 messages overall have been removed from the queue. udpateTimestamp The time stamp of the last message counter update. Note You can combine message counters to determine other meaningful data as well. For example, to know specifically how many messages were consumed from the queue since the last update, you would subtract the messageCountDelta from countDelta . 6.6.2. Enabling message counters Message counters can have a small impact on the broker's memory; therefore, they are disabled by default. To use message counters, you must first enable them. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Enable message counters. <message-counter-enabled>true</message-counter-enabled> Set the message counter history and sampling period. <message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period> message-counter-max-day-history The number of days the broker should store queue metrics. The default is 10 days. message-counter-sample-period How often (in milliseconds) the broker should sample its queues to collect metrics. The default is 10000 milliseconds. 6.6.3. Retrieving message counters You can use the management API to retrieve message counters. Prerequisites Message counters must be enabled on the broker. For more information, see Section 6.6.2, "Enabling message counters" . Procedure Use the management API to retrieve message counters. // Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = ... JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format("%s message(s) in the queue (since last sample: %s)\n", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta()); Additional resources For more information about message counters, see Section 6.4.3, "Queue management operations" . | [
"org.apache.activemq.artemis:broker=\"__BROKER_NAME__\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"",
"org.apache.activemq.artemis.api.management.QueueControl",
"<jmx-management-enabled>true</jmx-management-enabled>",
"<jmx-domain>my.org.apache.activemq</jmx-domain>",
"<connector connector-port=\"1099\"/>",
"service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi",
"curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"/Version -H \"Origin: mydomain.com\" {\"request\":{\"mbean\":\"org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"\",\"attribute\":\"Version\",\"type\":\"read\"},\"value\":\"2.4.0.amq-710002-redhat-1\",\"timestamp\":1527105236,\"status\":200}",
"<management-address>my.management.address</management-address>",
"<security-setting-match=\"activemq.management\"> <permission-type=\"manage\" roles=\"admin\"/> </security-setting>",
"Queue managementQueue = ActiveMQJMSClient.createQueue(\"activemq.management\"); QueueSession session = QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, \"queue.exampleQueue\", \"messageCount\"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println(\"There are \" + count + \" messages in exampleQueue\");",
"<message-counter-enabled>true</message-counter-enabled>",
"<message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period>",
"// Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format(\"%s message(s) in the queue (since last sample: %s)\\n\", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta());"
]
| https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/managing_amq_broker/management-api-managing |
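Bringing the JMX pieces of this chapter together, the following is a minimal client-side sketch, not taken from the product documentation, that connects to the remote JMX connector configured in management.xml and reads the message count of exampleQueue through a QueueControl proxy. The host, port 1099, admin credentials, and the broker name "0.0.0.0" are assumptions that you would replace with your own values:
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.QueueControl;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // JMX service URL for the connector defined in management.xml (RMI registry on port 1099).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        // Credentials for the JMX realm; replace with a user that holds the required role.
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "admin" });
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // ObjectName pattern as shown earlier in this chapter; "0.0.0.0" is a placeholder broker name.
            ObjectName on = new ObjectName(
                "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,address=\"exampleQueue\","
                + "subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"");
            // Create a typed proxy for the QueueControl MBean and read an attribute through it.
            QueueControl queueControl = MBeanServerInvocationHandler.newProxyInstance(
                mbsc, on, QueueControl.class, false);
            System.out.println("Messages in exampleQueue: " + queueControl.getMessageCount());
        }
    }
}
The same ObjectName can also be built with the ObjectNameBuilder helper class mentioned earlier in the chapter instead of being written out by hand.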
18.6. The Default Configuration | 18.6. The Default Configuration When the libvirtd daemon ( libvirtd ) is first installed, it contains an initial virtual network switch configuration in NAT mode. This configuration is used so that installed guests can communicate to the external network, through the host physical machine. The following image demonstrates this default configuration for libvirtd : Figure 18.7. Default libvirt network configuration Note A virtual network can be restricted to a specific physical interface. This may be useful on a physical system that has several interfaces (for example, eth0 , eth1 and eth2 ). This is only useful in routed and NAT modes, and can be defined in the dev=<interface> option, or in virt-manager when creating a new virtual network. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-the-default_configuration-libvirt |
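As an illustration of the dev=<interface> restriction mentioned above, a NAT-mode virtual network pinned to a single physical interface could be defined with libvirt network XML along these lines; the network name, interface eth0, bridge name, and address range are assumed values:
<network>
  <name>restricted-nat</name>
  <!-- forward traffic only through the eth0 physical interface -->
  <forward mode='nat' dev='eth0'/>
  <bridge name='virbr1'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.2' end='192.168.150.254'/>
    </dhcp>
  </ip>
</network>
A definition like this can be loaded with virsh net-define and started with virsh net-start, or created through the equivalent fields in virt-manager.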
Chapter 5. Kafka Bridge interface | Chapter 5. Kafka Bridge interface The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a HTTP API connection to Streams for Apache Kafka for clients to produce and consume messages without the requirement to use the native Kafka protocol. The API has two main resources - consumers and topics - that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 5.1. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to perform operations such as the following: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic. Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Additional resources To view the API documentation, including example requests and responses, see Using the Kafka Bridge . 5.2. Supported clients for the Kafka Bridge You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster. Internal clients Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself. Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running. External clients can access the Kafka Bridge through an OpenShift Route, a loadbalancer service, or using an Ingress. HTTP internal and external client integration | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/overview-components-kafka-bridge_str |
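As an informal sketch of the HTTP interactions described above, producing to a topic, creating a consumer, and subscribing it might look like the following; the bridge host and port, topic, group, and consumer names are assumptions, and the content types follow the Kafka Bridge's embedded JSON format:
# send two JSON records to the topic my-topic
curl -X POST http://my-bridge.example.com:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"event-1","value":"hello"},{"value":"world"}]}'
# create a consumer named my-consumer in the consumer group my-group
curl -X POST http://my-bridge.example.com:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"my-consumer","format":"json","auto.offset.reset":"earliest"}'
# subscribe the consumer to the topic so it starts receiving messages
curl -X POST http://my-bridge.example.com:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"topics":["my-topic"]}'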
Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1] | Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. 3.1.1. .spec Description ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. Type object Required hrefTemplate text Property Type Description hrefTemplate string hrefTemplate is an absolute secure URL (must use https) for the log link including variables to be replaced. Variables are specified in the URL with the format ${variableName}, for instance, ${containerName} and will be replaced with the corresponding values from the resource. Resource is a pod. Supported variables are: - ${resourceName} - name of the resource which contains the logs - ${resourceUID} - UID of the resource which contains the logs - e.g. 11111111-2222-3333-4444-555555555555 - ${containerName} - name of the resource's container that contains the logs - ${resourceNamespace} - namespace of the resource that contains the logs - ${resourceNamespaceUID} - namespace UID of the resource that contains the logs - ${podLabels} - JSON representation of labels matching the pod with the logs - e.g. {"key1":"value1","key2":"value2"} e.g., https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels} namespaceFilter string namespaceFilter is a regular expression used to restrict a log link to a matching set of namespaces (e.g., ^openshift- ). The string is converted into a regular expression using the JavaScript RegExp constructor. If not specified, links will be displayed for all the namespaces. text string text is the display text for the link 3.2.
API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleexternalloglinks DELETE : delete collection of ConsoleExternalLogLink GET : list objects of kind ConsoleExternalLogLink POST : create a ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name} DELETE : delete a ConsoleExternalLogLink GET : read the specified ConsoleExternalLogLink PATCH : partially update the specified ConsoleExternalLogLink PUT : replace the specified ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status GET : read status of the specified ConsoleExternalLogLink PATCH : partially update status of the specified ConsoleExternalLogLink PUT : replace status of the specified ConsoleExternalLogLink 3.2.1. /apis/console.openshift.io/v1/consoleexternalloglinks HTTP method DELETE Description delete collection of ConsoleExternalLogLink Table 3.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleExternalLogLink Table 3.2. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleExternalLogLink Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.5. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 202 - Accepted ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.2. /apis/console.openshift.io/v1/consoleexternalloglinks/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink HTTP method DELETE Description delete a ConsoleExternalLogLink Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8.
HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleExternalLogLink Table 3.9. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleExternalLogLink Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleExternalLogLink Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.14. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.3. /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status Table 3.15.
Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink HTTP method GET Description read status of the specified ConsoleExternalLogLink Table 3.16. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleExternalLogLink Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleExternalLogLink Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.21. HTTP responses HTTP code Response body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/console_apis/consoleexternalloglink-console-openshift-io-v1
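Putting the spec fields together, a complete ConsoleExternalLogLink object might look like the following; the link text, target URL, and namespace filter are illustrative values rather than anything defined by the schema itself:
apiVersion: console.openshift.io/v1
kind: ConsoleExternalLogLink
metadata:
  name: example-log-link
spec:
  text: View logs in Example Viewer
  # variables in hrefTemplate are substituted with values from the pod, as described in .spec
  hrefTemplate: 'https://logs.example.com/search?namespace=${resourceNamespace}&pod=${resourceName}&container=${containerName}'
  namespaceFilter: ^openshift-
An object like this could then be created with oc apply -f, and the resulting link appears on the logs tab of the pod details page, as described in the spec.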
Chapter 19. Using the Eclipse IDE for C and C++ Application Development | Chapter 19. Using the Eclipse IDE for C and C++ Application Development Some developers prefer using an IDE instead of an array of command line tools. Red Hat makes available the Eclipse IDE with support for development of C and C++ applications. Using Eclipse to Develop C and C++ Applications A detailed description of the Eclipse IDE and its use for developing C and C++ applications is out of the scope of this document. Please refer to the resources linked below. Additional Resources Using Eclipse Eclipse documentation - C/C++ Development User Guide | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/creating_c_cpp_applications_using-eclipse-c-cpp |
14.2. Adding External Providers | 14.2. Adding External Providers 14.2.1. Adding a Red Hat Satellite Instance for Host Provisioning Add a Satellite instance for host provisioning to the Red Hat Virtualization Manager. Red Hat Virtualization 4.2 is supported with Red Hat Satellite 6.1. Adding a Satellite Instance for Host Provisioning Click Administration Providers . Click Add . Enter a Name and Description . Select Foreman/Satellite from the Type drop-down list. Enter the URL or fully qualified domain name of the machine on which the Satellite instance is installed in the Provider URL text field. You do not need to specify a port number. Important IP addresses cannot be used to add a Satellite instance. Select the Requires Authentication check box. Enter the Username and Password for the Satellite instance. You must use the same user name and password as you would use to log in to the Satellite provisioning portal. Test the credentials: Click Test to test whether you can authenticate successfully with the Satellite instance using the provided credentials. If the Satellite instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the Satellite instance provides to ensure the Manager can communicate with the instance. Click OK . 14.2.2. Adding an OpenStack Image (Glance) Instance for Image Management Add an OpenStack Image (Glance) instance for image management to the Red Hat Virtualization Manager. Adding an OpenStack Image (Glance) Instance for Image Management Click Administration Providers . Click Add and enter the details in the General Settings tab. For more information on these fields, see Section 14.2.10, "Add Provider General Settings Explained" . Enter a Name and Description . Select OpenStack Image from the Type drop-down list. Enter the URL or fully qualified domain name of the machine on which the OpenStack Image instance is installed in the Provider URL text field. Optionally, select the Requires Authentication check box and enter the Username and Password for the OpenStack Image instance user registered in Keystone. You must also define the authentication URL of the Keystone server by defining the Protocol (must be HTTP ), Hostname , and API Port. Enter the Tenant for the OpenStack Image instance. Test the credentials: Click Test to test whether you can authenticate successfully with the OpenStack Image instance using the provided credentials. If the OpenStack Image instance uses SSL, the Import provider certificates window opens. Click OK to import the certificate that the OpenStack Image instance provides to ensure the Manager can communicate with the instance. Click OK . 14.2.3. Adding an OpenStack Networking (Neutron) Instance for Network Provisioning Add an OpenStack Networking (neutron) instance for network provisioning to the Red Hat Virtualization Manager. To add other third-party network providers that implement the OpenStack Neutron REST API, see Section 14.2.9, "Adding an External Network Provider" . Important Red Hat Virtualization supports Red Hat OpenStack Platform versions 10, 13, and 14 as external network providers. OpenStack 10 should be deployed with an OVS driver. OpenStack 13 should be deployed with an OVS, OVN, or ODL driver. OpenStack 14 should be deployed with an OVN or ODL driver. To use neutron networks, hosts must have the neutron agents configured. 
You can configure the agents manually, or use the Red Hat OpenStack Platform director to deploy the Networker role, before adding the network node to the Manager as a host. Using the director is recommended. Automatic deployment of the neutron agents through the Network Provider tab in the New Host window is not supported. Although network nodes and regular hosts can be used in the same cluster, virtual machines using neutron networks can only run on network nodes. Adding a Network Node as a Host Use the Red Hat OpenStack Platform director to deploy the Networker role on the network node. See Creating a New Role and Networker in the Red Hat OpenStack Platform Advanced Overcloud Customization Guide . Enable the required repositories: Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: Use the pool IDs to attach the subscriptions to the system: Configure the repositories: Ensure that all packages currently installed are up to date: Reboot the machine if any kernel packages were updated. Install the Openstack Networking hook: Add the network node to the Manager as a host. See Section 10.5.1, "Adding Standard Hosts to the Red Hat Virtualization Manager" . Important Do not select the OpenStack Networking provider from the Network Provider tab. This is currently not supported. Adding an OpenStack Networking (Neutron) Instance for Network Provisioning Click Administration Providers . Click Add and enter the details in the General Settings tab. For more information on these fields, see Section 14.2.10, "Add Provider General Settings Explained" . Enter a Name and Description . Select OpenStack Networking from the Type drop-down list. Ensure that Open vSwitch is selected in the Networking Plugin field. Optionally, select the Automatic Synchronization check box. This enables automatic synchronization of the external network provider with existing networks. Enter the URL or fully qualified domain name of the machine on which the OpenStack Networking instance is installed in the Provider URL text field, followed by the port number. The Read-Only check box is selected by default. This prevents users from modifying the OpenStack Networking instance. Important You must leave the Read-Only check box selected for your setup to be supported by Red Hat. Optionally, select the Requires Authentication check box and enter the Username and Password for the OpenStack Networking user registered in Keystone. You must also define the authentication URL of the Keystone server by defining the Protocol, Hostname, API Port, and API Version . For API version 2.0, enter the Tenant for the OpenStack Networking instance. For API version 3, enter the User Domain Name , Project Name , and Project Domain Name . Test the credentials: Click Test to test whether you can authenticate successfully with the OpenStack Networking instance using the provided credentials. If the OpenStack Networking instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the OpenStack Networking instance provides to ensure the Manager can communicate with the instance. Click the Agent Configuration tab. Warning The following steps are provided only as a Technology Preview. Red Hat Virtualization only supports preconfigured neutron hosts. 
Enter a comma-separated list of interface mappings for the Open vSwitch agent in the Interface Mappings field. Select the message broker type that the OpenStack Networking instance uses from the Broker Type list. Enter the URL or fully qualified domain name of the host on which the message broker is hosted in the Host field. Enter the Port by which to connect to the message broker. This port number will be 5762 by default if the message broker is not configured to use SSL, and 5761 if it is configured to use SSL. Enter the Username and Password of the OpenStack Networking user registered in the message broker instance. Click OK . You have added the OpenStack Networking instance to the Red Hat Virtualization Manager. Before you can use the networks it provides, import the networks into the Manager. See Section 9.3.1, "Importing Networks From External Providers" . 14.2.4. Adding an OpenStack Block Storage (Cinder) Instance for Storage Management Important Using an OpenStack Block Storage (Cinder) instance for storage management is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ . Add an OpenStack Block Storage (Cinder) instance for storage management to the Red Hat Virtualization Manager. The OpenStack Cinder volumes are provisioned by Ceph Storage. Adding an OpenStack Block Storage (Cinder) Instance for Storage Management Click Administration Providers . Click Add and enter the details in the General Settings tab. For more information on these fields, see Section 14.2.10, "Add Provider General Settings Explained" . Enter a Name and Description . Select OpenStack Block Storage from the Type drop-down list. Select the Data Center to which OpenStack Block Storage volumes will be attached. Enter the URL or fully qualified domain name of the machine on which the OpenStack Block Storage instance is installed, followed by the port number, in the Provider URL text field. Optionally, select the Requires Authentication check box and enter the Username and Password for the OpenStack Block Storage instance user registered in Keystone. Define the authentication URL of the Keystone server by defining the Protocol (must be HTTP ), Hostname , and API Port . Enter the Tenant for the OpenStack Block Storage instance. Click Test to test whether you can authenticate successfully with the OpenStack Block Storage instance using the provided credentials. Click OK . If client Ceph authentication ( cephx ) is enabled, you must also complete the following steps. The cephx protocol is enabled by default. On your Ceph server, create a new secret key for the client.cinder user using the ceph auth get-or-create command. See Cephx Configuration Reference for more information on cephx , and Managing Users for more information on creating keys for new users. If a key already exists for the client.cinder user, retrieve it using the same command. In the Administration Portal, select the newly created Cinder external provider from the Providers list. Click the Authentication Keys tab. Click New . Enter the secret key in the Value field. 
Copy the automatically generated UUID , or enter an existing UUID in the text field. On your Cinder server, add the UUID from the previous step and the cinder user to /etc/cinder/cinder.conf : See Section 13.6.1, "Creating a Virtual Disk" for more information about creating an OpenStack Block Storage (Cinder) disk. 14.2.5. Adding a VMware Instance as a Virtual Machine Provider Add a VMware vCenter instance to import virtual machines from VMware to the Red Hat Virtualization Manager. Red Hat Virtualization uses V2V to convert VMware virtual machines to the correct format before they are imported. The virt-v2v package must be installed on at least one host. The virt-v2v package is available by default on Red Hat Virtualization Hosts (RHVH) and is installed on Red Hat Enterprise Linux hosts as a dependency of VDSM when added to the Red Hat Virtualization environment. Red Hat Enterprise Linux hosts must be Red Hat Enterprise Linux 7.2 or later. Note The virt-v2v package is not available on ppc64le architecture; these hosts cannot be used as proxy hosts. Adding a VMware vCenter Instance as a Virtual Machine Provider Click Administration Providers . Click Add . Enter a Name and Description . Select VMware from the Type drop-down list. Select the Data Center into which VMware virtual machines will be imported, or select Any Data Center to instead specify the destination data center during individual import operations. Enter the IP address or fully qualified domain name of the VMware vCenter instance in the vCenter field. Enter the IP address or fully qualified domain name of the host from which the virtual machines will be imported in the ESXi field. Enter the name of the data center in which the specified ESXi host resides in the Data Center field. If you have exchanged the SSL certificate between the ESXi host and the Manager, leave the Verify server's SSL certificate check box selected to verify the ESXi host's certificate. If not, clear the check box. Select a host in the chosen data center with virt-v2v installed to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the VMware vCenter external provider. If you selected Any Data Center above, you cannot choose the host here, but instead can specify a host during individual import operations. Enter the Username and Password for the VMware vCenter instance. The user must have access to the VMware data center and ESXi host on which the virtual machines reside. Test the credentials: Click Test to test whether you can authenticate successfully with the VMware vCenter instance using the provided credentials. If the VMware vCenter instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the VMware vCenter instance provides to ensure the Manager can communicate with the instance. Click OK . To import virtual machines from the VMware external provider, see Importing a Virtual Machine from a VMware Provider in the Virtual Machine Management Guide . 14.2.6. Adding a RHEL 5 Xen Host as a Virtual Machine Provider Add a RHEL 5 Xen host to import virtual machines from Xen to Red Hat Virtualization. Red Hat Virtualization uses V2V to convert RHEL 5 Xen virtual machines to the correct format before they are imported. The virt-v2v package must be installed on at least one host.
The virt-v2v package is available by default on Red Hat Virtualization Hosts (RHVH) and is installed on Red Hat Enterprise Linux hosts as a dependency of VDSM when added to the Red Hat Virtualization environment. Red Hat Enterprise Linux hosts must be Red Hat Enterprise Linux 7.2 or later. Note The virt-v2v package is not available on ppc64le architecture; these hosts cannot be used as proxy hosts. Adding a RHEL 5 Xen Instance as a Virtual Machine Provider Enable public key authentication between the proxy host and the RHEL 5 Xen host: Log in to the proxy host and generate SSH keys for the vdsm user. Copy the vdsm user's public key to the RHEL 5 Xen host. The proxy host's known_hosts file will also be updated to include the host key of the RHEL 5 Xen host. Log in to the RHEL 5 Xen host to verify that the login works correctly. Click Administration Providers . Click Add . Enter a Name and Description . Select XEN from the Type drop-down list. Select the Data Center into which Xen virtual machines will be imported, or select Any Data Center to specify the destination data center during individual import operations. Enter the URI of the RHEL 5 Xen host in the URI field. Select a host in the chosen data center with virt-v2v installed to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the RHEL 5 Xen external provider. If you selected Any Data Center above, you cannot choose the host here, but instead can specify a host during individual import operations. Click Test to test whether you can authenticate successfully with the RHEL 5 Xen host. Click OK . To import virtual machines from a RHEL 5 Xen external provider, see Importing a Virtual Machine from a RHEL 5 Xen Host in the Virtual Machine Management Guide . 14.2.7. Adding a KVM Host as a Virtual Machine Provider Add a KVM host to import virtual machines from KVM to Red Hat Virtualization Manager. Adding a KVM Host as a Virtual Machine Provider Enable public key authentication between the proxy host and the KVM host: Log in to the proxy host and generate SSH keys for the vdsm user. Copy the vdsm user's public key to the KVM host. The proxy host's known_hosts file will also be updated to include the host key of the KVM host. Log in to the KVM host to verify that the login works correctly. Click Administration Providers . Click Add . Enter a Name and Description . Select KVM from the Type drop-down list. Select the Data Center into which KVM virtual machines will be imported, or select Any Data Center to specify the destination data center during individual import operations. Enter the URI of the KVM host in the URI field. Select a host in the chosen data center to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the KVM external provider. If you selected Any Data Center in the Data Center field above, you cannot choose the host here. The field is greyed out and shows Any Host in Data Center . Instead you can specify a host during individual import operations. Optionally, select the Requires Authentication check box and enter the Username and Password for the KVM host. The user must have access to the KVM host on which the virtual machines reside. Click Test to test whether you can authenticate successfully with the KVM host using the provided credentials. Click OK . 
To import virtual machines from a KVM external provider, see Importing a Virtual Machine from a KVM Host in the Virtual Machine Management Guide . 14.2.8. Adding Open Virtual Network (OVN) as an External Network Provider Open Virtual Network (OVN) enables you to create networks without adding VLANs or changing the infrastructure. OVN is an Open vSwitch (OVS) extension that enables support for virtual networks by adding native OVS support for virtual L2 and L3 overlays. You can either install a new OVN network provider or add an existing one . You can also connect an OVN network to a native Red Hat Virtualization network. See Section 14.2.8.5, "Connecting an OVN Network to a Physical Network" for more information. This feature is available as a Technology Preview only. A Neutron-like REST API is exposed by ovirt-provider-ovn , enabling you to create networks, subnets, ports, and routers (see the OpenStack Networking API v2.0 for details). These overlay networks enable communication among the virtual machines. Note OVN is supported as an external provider by CloudForms, using the OpenStack (Neutron) API. See Network Managers in Red Hat CloudForms: Managing Providers for details. For more information on OVS and OVN, see the OVS documentation at http://docs.openvswitch.org/en/latest/ and http://openvswitch.org/support/dist-docs/ . 14.2.8.1. Installing a New OVN Network Provider Warning If the openvswitch package is already installed and if the version is 1:2.6.1 (version 2.6.1, epoch 1), the OVN installation will fail when it tries to install the latest openvswitch package. See the Doc Text in BZ#1505398 for the details and a workaround. When you install OVN using engine-setup , the following steps are automated: Setting up an OVN central server on the Manager machine. Adding OVN to Red Hat Virtualization as an external network provider. Setting the Default cluster's default network provider to ovirt-provider-ovn . Configuring hosts to communicate with OVN when added to the cluster. If you use a preconfigured answer file with engine-setup , you can add the following entry to install OVN: Installing a New OVN Network Provider Install OVN on the Manager using engine-setup. During the installation, engine-setup asks the following questions: If Yes , engine-setup installs ovirt-provider-ovn . If engine-setup is updating a system, this prompt only appears if ovirt-provider-ovn has not been installed previously. If No , you will not be asked again on the next run of engine-setup . If you want to see this option, run engine-setup --reconfigure-optional-components . If Yes , engine-setup uses the default engine user and password specified earlier in the setup process. This option is only available during new installations. You can use the default values or specify the oVirt OVN provider user and password. Note To change the authentication method later, you can edit the /etc/ovirt-provider-ovn/conf.d/10_engine_setup.conf file, or create a new /etc/ovirt-provider-ovn/conf.d/20_engine_setup.conf file. Restart the ovirt-provider-ovn service for the change to take effect. See https://github.com/oVirt/ovirt-provider-ovn/blob/master/README.adoc for more information about OVN authentication. Add hosts to the Default cluster. Hosts added to this cluster are automatically configured to communicate with OVN. To add new hosts, see Section 10.5.1, "Adding Standard Hosts to the Red Hat Virtualization Manager" .
To configure your hosts to use an existing, non-default network, see Section 14.2.8.4, "Configuring Hosts for an OVN Tunnel Network" . Add networks to the Default cluster; see Section 9.1.2, "Creating a New Logical Network in a Data Center or Cluster" and select the Create on external provider check box. ovirt-provider-ovn is selected by default. To connect the OVN network to a native Red Hat Virtualization network, select the Connect to physical network check box and specify the Red Hat Virtualization network to use. See Section 14.2.8.5, "Connecting an OVN Network to a Physical Network" for more information and prerequisites. Define whether the network should use Security Groups from the Security Groups drop-down. For more information on the available options see Section 9.1.7, "Logical Network General Settings Explained" . You can now create virtual machines that use OVN networks. 14.2.8.2. Adding an Existing OVN Network Provider Adding an existing OVN central server as an external network provider in Red Hat Virtualization involves the following key steps: Install the OVN provider, a proxy used by the Manager to interact with OVN. The OVN provider can be installed on any machine, but must be able to communicate with the OVN central server and the Manager. Add the OVN provider to Red Hat Virtualization as an external network provider. Create a new cluster that uses OVN as its default network provider. Hosts added to this cluster are automatically configured to communicate with OVN. Prerequisites The following packages are required by the OVN provider and must be available on the provider machine: openvswitch-ovn-central openvswitch openvswitch-ovn-common python-openvswitch If these packages are not available from the repositories already enabled on the provider machine, they can be downloaded from the OVS website: http://openvswitch.org/download/ . Adding an Existing OVN Network Provider Install and configure the OVN provider. Install the provider on the provider machine: If you are not installing the provider on the same machine as the Manager, add the following entry to the /etc/ovirt-provider-ovn/conf.d/10_engine_setup.conf file (create this file if it does not already exist): This is used for authentication, if authentication is enabled. If you are not installing the provider on the same machine as the OVN central server, add the following entry to the /etc/ovirt-provider-ovn/conf.d/10_engine_setup.conf file (create this file if it does not already exist): Open ports 9696, 6641, and 6642 in the firewall to allow communication between the OVN provider, the OVN central server, and the Manager. This can be done either manually or by adding the ovirt-provider-ovn and ovirt-provider-ovn-central services to the appropriate zone: Start and enable the service: Configure the OVN central server to listen to requests from ports 6642 and 6641: In the Administration Portal, click Administration Providers . Click Add and enter the details in the General Settings tab. For more information on these fields, see Section 14.2.10, "Add Provider General Settings Explained" . Enter a Name and Description . From the Type list, select External Network Provider . Click the Networking Plugin text box and select oVirt Network Provider for OVN from the drop-down menu. Optionally, select the Automatic Synchronization check box. This enables automatic synchronization of the external network provider with existing networks. 
Note Automatic synchronization is enabled by default on the ovirt-provider-ovn network provider created by the engine-setup tool. Enter the URL or fully qualified domain name of the OVN provider in the Provider URL text field, followed by the port number. If the OVN provider and the OVN central server are on separate machines, this is the URL of the provider machine, not the central server. If the OVN provider is on the same machine as the Manager, the URL can remain the default http://localhost:9696 . Clear the Read-Only check box to allow creating new OVN networks from the Red Hat Virtualization Manager. Optionally, select the Requires Authentication check box and enter the Username and Password for the external network provider user registered in Keystone. You must also define the authentication URL of the Keystone server by defining the Protocol , Hostname , and API Port . Optionally, enter the Tenant for the external network provider. The authentication method must be configured in the /etc/ovirt-provider-ovn/conf.d/10_engine_setup.conf file (create this file if it does not already exist). Restart the ovirt-provider-ovn service for the change to take effect. See https://github.com/oVirt/ovirt-provider-ovn/blob/master/README.adoc for more information about OVN authentication. Test the credentials: Click Test to test whether you can authenticate successfully with OVN using the provided credentials. If the OVN instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the OVN instance provides to ensure the Manager can communicate with the instance. Click OK . Create a new cluster that uses OVN as its default network provider. See Section 8.2.1, "Creating a New Cluster" and select the OVN network provider from the Default Network Provider drop-down list. Add hosts to the cluster. Hosts added to this cluster are automatically configured to communicate with OVN. To add new hosts, see Section 10.5.1, "Adding Standard Hosts to the Red Hat Virtualization Manager" . Import or add OVN networks to the new cluster. To import networks, see Importing Networks . To create new networks using OVN, see Creating a new logical network in a data center or cluster , and select the Create on external provider check box. ovirt-provider-ovn is selected by default. To configure your hosts to use an existing, non-default network, see Section 14.2.8.4, "Configuring Hosts for an OVN Tunnel Network" . To connect the OVN network to a native Red Hat Virtualization network, select the Connect to physical network check box and specify the Red Hat Virtualization network to use. See Section 14.2.8.5, "Connecting an OVN Network to a Physical Network" for more information and prerequisites. You can now create virtual machines that use OVN networks. 14.2.8.3. Using an Ansible playbook to modify an OVN tunnel network You can use the ovirt-provider-ovn-driver Ansible playbook to modify the tunnel network for OVN controllers using long network names. Ansible playbook to modify an OVN tunnel network Parameters key-file The key file to log into the host. The default key file is usually found in the /etc/pki/ovirt-engine/keys directory. inventory The oVirt VM inventory. To locate the inventory value, use this script: /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory . cluster_name The name of the cluster on which to update the name. ovn_central The IP address of the OVN central server. This IP address must be accessible to all hosts.
ovirt_network The oVirt network name. ovn_tunneling_interface The VDSM network name. Note The ovirt-provider-ovn-driver Ansible playbook supports using either the ovirt_network parameter or the ovn_tunneling_interface parameter. This playbook fails if both parameters are present in the same playbook. Playbook with ovirt_network parameter Playbook with ovn_tunneling_interface parameter On the Manager machine, navigate to the /usr/share/ovirt-engine/playbooks directory to run the Ansible playbooks. 14.2.8.4. Configuring Hosts for an OVN Tunnel Network You can configure your hosts to use an existing network, other than the default ovirtmgmt network, with the ovirt-provider-ovn-driver Ansible playbook. The network must be accessible to all the hosts in the cluster. Note The ovirt-provider-ovn-driver Ansible playbook updates existing hosts. If you add new hosts to the cluster, you must run the playbook again. Configuring Hosts for an OVN Tunnel Network On the Manager machine, go to the playbooks directory: Run the ansible-playbook command with the following parameters: For example: Note The OVN_Central_IP can be on the new network, but this is not a requirement. The OVN_Central_IP must be accessible to all hosts. The VDSM_Network_Name is limited to 15 characters. If you defined a logical network name that was longer than 15 characters or contained non-ASCII characters, a 15-character name is automatically generated. See Mapping VDSM Names to Logical Network Names for instructions on displaying a mapping of these names. Updating the OVN Tunnel Network on a Single Host You can update the OVN tunnel network on a single host with vdsm-tool : Example 14.1. Updating a Host with vdsm-tool 14.2.8.5. Connecting an OVN Network to a Physical Network Important This feature relies on Open vSwitch support, which is available only as a Technology Preview in Red Hat Virtualization. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ . You can create an external provider network that overlays a native Red Hat Virtualization network so that the virtual machines on each appear to be sharing the same subnet. Important If you created a subnet for the OVN network, a virtual machine using that network will receive an IP address from there. If you want the physical network to allocate the IP address, do not create a subnet for the OVN network. Prerequisites The cluster must have OVS selected as the Switch Type . Hosts added to this cluster must not have any pre-existing Red Hat Virtualization networks configured, such as the ovirtmgmt bridge. The physical network must be available on the hosts. You can enforce this by setting the physical network as required for the cluster (in the Manage Networks window, or the Cluster tab of the New Logical Network window). Creating a New External Network Connected to a Physical Network Click Compute Clusters . Click the cluster's name to open the details view. Click the Logical Networks tab and click Add Network . Enter a Name for the network. Select the Create on external provider check box. ovirt-provider-ovn is selected by default. 
Select the Connect to physical network check box if it is not already selected by default. Choose the physical network to connect the new network to: Click the Data Center Network radio button and select the physical network from the drop-down list. This is the recommended option. Click the Custom radio button and enter the name of the physical network. If the physical network has VLAN tagging enabled, you must also select the Enable VLAN tagging check box and enter the physical network's VLAN tag. Important The physical network's name must not be longer than 15 characters, or contain special characters. Click OK . 14.2.9. Adding an External Network Provider Any network provider that implements the OpenStack Neutron REST API can be added to Red Hat Virtualization. The virtual interface driver needs to be provided by the implementer of the external network provider. A reference implementation of a network provider and a virtual interface driver are available at https://github.com/mmirecki/ovirt-provider-mock and https://github.com/mmirecki/ovirt-provider-mock/blob/master/docs/driver_instalation . Adding an External Network Provider for Network Provisioning Click Administration Providers . Click Add and enter the details in the General Settings tab. For more information on these fields, see Section 14.2.10, "Add Provider General Settings Explained" . Enter a Name and Description . Select External Network Provider from the Type drop-down list. Optionally, click the Networking Plugin text box and select the appropriate driver from the drop-down menu. Optionally, select the Automatic Synchronization check box. This enables automatic synchronization of the external network provider with existing networks. This feature is disabled by default when adding external network providers. Note Automatic synchronization is enabled by default on the ovirt-provider-ovn network provider created by the engine-setup tool. Enter the URL or fully qualified domain name of the machine on which the external network provider is installed in the Provider URL text field, followed by the port number. The Read-Only check box is selected by default. This prevents users from modifying the external network provider. Important You must leave the Read-Only check box selected for your setup to be supported by Red Hat. Optionally, select the Requires Authentication check box and enter the Username and Password for the external network provider user registered in Keystone. You must also define the authentication URL of the Keystone server by defining the Protocol , Hostname , and API Port . Optionally, enter the Tenant for the external network provider. Test the credentials: Click Test to test whether you can authenticate successfully with the external network provider using the provided credentials. If the external network provider uses SSL, the Import provider certificates window opens; click OK to import the certificate that the external network provider provides to ensure the Manager can communicate with the instance. Click OK . Before you can use networks from this provider, you must install the virtual interface driver on the hosts and import the networks. To import networks, see Section 9.3.1, "Importing Networks From External Providers" . 14.2.10. Add Provider General Settings Explained The General tab in the Add Provider window allows you to register the core details of the external provider. Table 14.1. Add Provider: General Settings Setting Explanation Name A name to represent the provider in the Manager. 
Description A plain text, human-readable description of the provider. Type The type of external provider. Changing this setting alters the available fields for configuring the provider. Foreman/Satellite Provider URL : The URL or fully qualified domain name of the machine that hosts the Satellite instance. You do not need to add the port number to the end of the URL or fully qualified domain name. Requires Authentication : Allows you to specify whether authentication is required for the provider. Authentication is mandatory when Foreman/Satellite is selected. Username : A user name for connecting to the Satellite instance. This user name must be the user name used to log in to the provisioning portal on the Satellite instance. Password : The password against which the above user name is to be authenticated. This password must be the password used to log in to the provisioning portal on the Satellite instance. OpenStack Image Provider URL : The URL or fully qualified domain name of the machine on which the OpenStack Image service is hosted. You must add the port number for the OpenStack Image service to the end of the URL or fully qualified domain name. By default, this port number is 9292. Requires Authentication : Allows you to specify whether authentication is required to access the OpenStack Image service. Username : A user name for connecting to the Keystone server. This user name must be the user name for the OpenStack Image service registered in the Keystone instance of which the OpenStack Image service is a member. Password : The password against which the above user name is to be authenticated. This password must be the password for the OpenStack Image service registered in the Keystone instance of which the OpenStack Image service is a member. Protocol : The protocol used to communicate with the Keystone server. This must be set to HTTP . Hostname : The IP address or hostname of the Keystone server. API port : The API port number of the Keystone server. API Version : The version of the Keystone service. The value is v2.0 and the field is disabled. Tenant Name : The name of the OpenStack tenant of which the OpenStack Image service is a member. OpenStack Networking Networking Plugin : The networking plugin with which to connect to the OpenStack Networking server. For OpenStack Networking, Open vSwitch is the only option, and is selected by default. Automatic Synchronization : Allows you to specify whether the provider will be automatically synchronized with existing networks. Provider URL : The URL or fully qualified domain name of the machine on which the OpenStack Networking instance is hosted. You must add the port number for the OpenStack Networking instance to the end of the URL or fully qualified domain name. By default, this port number is 9696. Read Only : Allows you to specify whether the OpenStack Networking instance can be modified from the Administration Portal. Requires Authentication : Allows you to specify whether authentication is required to access the OpenStack Networking service. Username : A user name for connecting to the OpenStack Networking instance. This user name must be the user name for OpenStack Networking registered in the Keystone instance of which the OpenStack Networking instance is a member. Password : The password against which the above user name is to be authenticated. This password must be the password for OpenStack Networking registered in the Keystone instance of which the OpenStack Networking instance is a member. 
Protocol : The protocol used to communicate with the Keystone server. The default is HTTPS . Hostname : The IP address or hostname of the Keystone server. API port : The API port number of the Keystone server. API Version : The version of the Keystone server. This appears in the URL. If v2.0 appears, select v2.0 . If v3 appears select v3 . The following fields appear when you select v3 from the API Version field: User Domain Name : The name of the user defined in the domain. With Keystone API v3, domains are used to determine administrative boundaries of service entities in OpenStack. Domains allow you to group users together for various purposes, such as setting domain-specific configuration or security options. For more information, see OpenStack Identity (keystone) in the Red Hat OpenStack Platform Architecture Guide . Project Name : Defines the project name for OpenStack Identity API v3. Project Domain Name : Defines the project's domain name for OpenStack Identity API v3. The following field appears when you select v2.0 from the API Version field: Tenant Name : Appears only when v2 is selected from the API Version field. The name of the OpenStack tenant of which the OpenStack Networking instance is a member. OpenStack Volume Data Center : The data center to which OpenStack Volume storage volumes will be attached. Provider URL : The URL or fully qualified domain name of the machine on which the OpenStack Volume instance is hosted. You must add the port number for the OpenStack Volume instance to the end of the URL or fully qualified domain name. By default, this port number is 8776. Requires Authentication : Allows you to specify whether authentication is required to access the OpenStack Volume service. Username : A user name for connecting to the Keystone server. This user name must be the user name for OpenStack Volume registered in the Keystone instance of which the OpenStack Volume instance is a member. Password : The password against which the above user name is to be authenticated. This password must be the password for OpenStack Volume registered in the Keystone instance of which the OpenStack Volume instance is a member. Protocol : The protocol used to communicate with the Keystone server. This must be set to HTTP . Hostname : The IP address or hostname of the Keystone server. API port : The API port number of the Keystone server. API Version : The version of the Keystone server. The value is v2.0 and the field is disabled. Tenant Name : The name of the OpenStack tenant of which the OpenStack Volume instance is a member. VMware Data Center : Specify the data center into which VMware virtual machines will be imported, or select Any Data Center to specify the destination data center during individual import operations (using the Import function in the Virtual Machines tab). vCenter : The IP address or fully qualified domain name of the VMware vCenter instance. ESXi : The IP address or fully qualified domain name of the host from which the virtual machines will be imported. Data Center : The name of the data center in which the specified ESXi host resides. Cluster : The name of the cluster in which the specified ESXi host resides. Verify server's SSL certificate : Specify whether the ESXi host's certificate will be verified on connection. Proxy Host : Select a host in the chosen data center with virt-v2v installed to serve as the host during virtual machine import operations. This host must also be able to connect to the network of the VMware vCenter external provider. 
If you selected Any Data Center , you cannot choose the host here, but can specify a host during individual import operations (using the Import function in the Virtual Machines tab). Username : A user name for connecting to the VMware vCenter instance. The user must have access to the VMware data center and ESXi host on which the virtual machines reside. Password : The password against which the above user name is to be authenticated. RHEL 5 Xen Data Center : Specify the data center into which Xen virtual machines will be imported, or select Any Data Center to instead specify the destination data center during individual import operations (using the Import function in the Virtual Machines tab). URI : The URI of the RHEL 5 Xen host. Proxy Host : Select a host in the chosen data center with virt-v2v installed to serve as the host during virtual machine import operations. This host must also be able to connect to the network of the RHEL 5 Xen external provider. If you selected Any Data Center , you cannot choose the host here, but instead can specify a host during individual import operations (using the Import function in the Virtual Machines tab). KVM Data Center : Specify the data center into which KVM virtual machines will be imported, or select Any Data Center to instead specify the destination data center during individual import operations (using the Import function in the Virtual Machines tab). URI : The URI of the KVM host. Proxy Host : Select a host in the chosen data center to serve as the host during virtual machine import operations. This host must also be able to connect to the network of the KVM external provider. If you selected Any Data Center , you cannot choose the host here, but instead can specify a host during individual import operations (using the Import function in the Virtual Machines tab). Requires Authentication : Allows you to specify whether authentication is required to access the KVM host. Username : A user name for connecting to the KVM host. Password : The password against which the above user name is to be authenticated. External Network Provider Networking Plugin : Determines which implementation of the driver will be used on the host to handle NIC operations. If an external network provider with the oVirt Network Provider for OVN plugin is added as the default network provider for a cluster, this also determines which driver will be installed on hosts added to the cluster. Automatic Synchronization : Allows you to specify whether the provider will be automatically synchronized with existing networks. Provider URL : The URL or fully qualified domain name of the machine on which the external network provider is hosted. You must add the port number for the external network provider to the end of the URL or fully qualified domain name. By default, this port number is 9696. Read Only : Allows you to specify whether the external network provider can be modified from the Administration Portal. Requires Authentication : Allows you to specify whether authentication is required to access the external network provider. Username : A user name for connecting to the external network provider. If you are authenticating with Active Directory, the user name must be in the format of username @ domain @ auth_profile instead of the default username @ domain . Password : The password against which the above user name is to be authenticated. Protocol : The protocol used to communicate with the Keystone server. The default is HTTPS . 
Hostname : The IP address or hostname of the Keystone server. API port : The API port number of the Keystone server. API Version : The version of the Keystone server. The value is v2.0 and the field is disabled. Tenant Name : Optional. The name of the tenant of which the external network provider is a member. Test Allows users to test the specified credentials. This button is available to all provider types. 14.2.11. Add Provider Agent Configuration Settings Explained The Agent Configuration tab in the Add Provider window allows users to register details for networking plugins. This tab is only available for the OpenStack Networking provider type. Table 14.2. Add Provider: Agent Configuration Settings Setting Explanation Interface Mappings A comma-separated list of mappings in the format of label : interface . Broker Type The message broker type that the OpenStack Networking instance uses. Select RabbitMQ or Qpid . Host The URL or fully qualified domain name of the machine on which the message broker is installed. Port The remote port by which a connection with the above host is to be made. By default, this port is 5762 if SSL is not enabled on the host, and 5761 if SSL is enabled. Username A user name for authenticating the OpenStack Networking instance with the above message broker. By default, this user name is neutron . Password The password against which the above user name is to be authenticated. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= poolid",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms",
"yum update",
"yum install vdsm-hook-openstacknet",
"rbd_secret_uuid = UUID rbd_user = cinder",
"sudo -u vdsm ssh-keygen",
"sudo -u vdsm ssh-copy-id root@ xenhost.example.com",
"sudo -u vdsm ssh root@ xenhost.example.com",
"sudo -u vdsm ssh-keygen",
"sudo -u vdsm ssh-copy-id root@ kvmhost.example.com",
"sudo -u vdsm ssh root@ kvmhost.example.com",
"qemu+ssh://[email protected]/system",
"OVESETUP_OVN/ovirtProviderOvn=bool:True",
"Install ovirt-provider-ovn(Yes, No) [Yes]? :",
"Use default credentials (admin@internal) for ovirt-provider-ovn(Yes, No) [Yes]? :",
"oVirt OVN provider user[admin] : oVirt OVN provider password[empty] :",
"yum install ovirt-provider-ovn",
"[OVIRT] ovirt-host=https:// Manager_host_name",
"[OVN REMOTE] ovn-remote=tcp: OVN_central_server_IP :6641",
"firewall-cmd --zone= ZoneName --add-service=ovirt-provider-ovn --permanent firewall-cmd --zone= ZoneName --add-service=ovirt-provider-ovn-central --permanent firewall-cmd --reload",
"systemctl start ovirt-provider-ovn systemctl enable ovirt-provider-ovn",
"ovn-sbctl set-connection ptcp:6642 ovn-nbctl set-connection ptcp:6641",
"ansible-playbook --key-file <path_to_key_file> -i <path_to_inventory> --extra-vars \" cluster_name=<cluster_name> ovn_central=<ovn_central_ip_address> ovirt_network=<ovirt network name> ovn_tunneling_interface=<vdsm_network_name>\" ovirt-provider-ovn-driver.yml",
"ansible-playbook --key-file /etc/pki/ovirt-engine/keys/engine_id_rsa -i /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory --extra-vars \" cluster_name=test-cluster ovn_central=192.168.200.2 ovirt_network=\\\"Long\\ Network\\ Name\\ with\\ \\Ascii\\ character\\ \\☺\\\"\" ovirt-provider-ovn-driver.yml",
"ansible-playbook --key-file /etc/pki/ovirt-engine/keys/engine_id_rsa -i /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory --extra-vars \" cluster_name=test-cluster ovn_central=192.168.200.2 ovn_tunneling_interface=on703ea21ddbc34\" ovirt-provider-ovn-driver.yml",
"cd /usr/share/ovirt-engine/playbooks",
"ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa -i /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory --extra-vars \" cluster_name= Cluster_Name ovn_central= OVN_Central_IP ovn_tunneling_interface= VDSM_Network_Name \" ovirt-provider-ovn-driver.yml",
"ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa -i /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory --extra-vars \" cluster_name=MyCluster ovn_central=192.168.0.1 ovn_tunneling_interface=MyNetwork\" ovirt-provider-ovn-driver.yml",
"vdsm-tool ovn-config OVN_Central_IP Tunneling_IP_or_Network_Name",
"vdsm-tool ovn-config 192.168.0.1 MyNetwork"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Adding_External_Providers |
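The cephx step in section 14.2.4 above refers to the ceph auth get-or-create command without showing an invocation. A minimal sketch follows; the capability string and the volumes pool name are assumptions for illustration, not values taken from this guide:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes'
If a key already exists for the client.cinder user and the capabilities match, running the same command prints the existing key instead of creating a new one, which is why the procedure notes that an existing key can be retrieved with the same command.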
Chapter 7. Enabling notifications and integrations | Chapter 7. Enabling notifications and integrations You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever a compliance policy is triggered. For example, you can configure the notifications service to automatically send an email message whenever a compliance policy falls below a certain threshold, or to send an email digest of all the compliance policy events that take place each day. Using the notifications service frees you from having to continually check the Red Hat Insights for RHEL dashboard for compliance event-triggered notifications. Enabling the notifications service requires three main steps: First, an Organization Administrator creates a User access group with the Notifications administrator role, and then adds account members to the group. Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event or a daily digest of all compliance events. In addition to sending email messages, you can configure the notifications service to send event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Additional resources For more information about how to set up notifications for compliance events, see Configuring notifications on the Red Hat Hybrid Cloud Console with FedRAMP . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports_with_fedramp/assembly-enabling-notifications-integrations-for-compliance
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/pr01 |
Chapter 1. Overview of compliance service reports | Chapter 1. Overview of compliance service reports The compliance service enables users to download data based on filters in place at the time of download. Downloading a compliance report requires the following actions: Uploading current system data to Red Hat Insights for Red Hat Enterprise Linux Filtering your results in the compliance service web console Downloading reports; either exporting comma separated values (CSV) or JavaScript Object Notation (JSON) data, or as a PDF | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports_with_fedramp/assembly-compl-report-overview |
14.7. Creating and Formatting New Images or Devices | 14.7. Creating and Formatting New Images or Devices Create the new disk image filename of size size and format format . If a base image is specified with -o backing_file= filename , the image will only record differences between itself and the base image. The backing file will not be modified unless you use the commit command. No size needs to be specified in this case. | [
"qemu-img create [-f format ] [-o options ] filename [ size ]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-creating_and_formatting_new_images_or_devices |
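A brief hedged example of the qemu-img create usage described above; the qcow2 format, the image file names, and the 10G size are illustrative assumptions:
qemu-img create -f qcow2 rhel7-base.qcow2 10G
qemu-img create -f qcow2 -o backing_file=rhel7-base.qcow2 rhel7-overlay.qcow2
The second command omits the size because the overlay only records differences from rhel7-base.qcow2, and the base image is not modified unless the commit command is used.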
12.3. Requesting a CA-signed Certificate Through SCEP | 12.3. Requesting a CA-signed Certificate Through SCEP The Simple Certificate Enrollment Protocol (SCEP) automates and simplifies the process of certificate management with the CA. It lets a client request and retrieve a certificate over HTTP directly from the CA's SCEP service. This process is secured by a one-time PIN that is usually valid only for a limited time. The following example adds a SCEP CA configuration to certmonger , requests a new certificate, and adds it to the local NSS database. Add the CA configuration to certmonger : -c : Mandatory nickname for the CA configuration. The same value can later be passed to other getcert commands. -u : URL to the server's SCEP interface. Mandatory parameter when using an HTTPS URL: -R CA_Filename : Location of the PEM-formatted copy of the SCEP server's CA certificate, used for the HTTPS encryption. Verify that the CA configuration has been successfully added: The CA configuration was successfully added, when the CA certificate thumbprints were retrieved over SCEP and shown in the command's output. When accessing the server over unencrypted HTTP, manually compare the thumbprints with the ones displayed at the SCEP server to prevent a Man-in-the-middle attack. Request a certificate from the CA: -I : Name of the task. The same value can later be passed to the getcert list command. -c : CA configuration to submit the request to. -d : Directory with the NSS database to store the certificate and key. -n : Nickname of the certificate, used in the NSS database. -N : Subject name in the CSR. -L : Time-limited one-time PIN issued by the CA. Right after submitting the request, you can verify that a certificate was issued and correctly stored in the local database: The status MONITORING signifies a successful retrieval of the issued certificate. The getcert-list(1) man page lists other possible states and their meanings. | [
"getcert add-scep-ca -c CA_Name -u SCEP_URL",
"getcert list-cas -c CA_Name CA 'CA_Name': is-default: no ca-type: EXTERNAL helper-location: /usr/libexec/certmonger/scep-submit -u http://SCEP_server_enrollment_interface_URL SCEP CA certificate thumbprint (MD5): A67C2D4B 771AC186 FCCA654A 5E55AAF7 SCEP CA certificate thumbprint (SHA1): FBFF096C 6455E8E9 BD55F4A5 5787C43F 1F512279",
"getcert request -I Task_Name -c CA_Name -d /etc/pki/nssdb -n Certificate_Name -N cn=\" Subject Name \" -L one-time_PIN",
"getcert list -I TaskName Request ID 'Task_Name': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/pki/nssdb',nickname='TestCert',token='NSS Certificate DB' certificate: type=NSSDB,location='/etc/pki/nssdb',nickname='TestCert',token='NSS Certificate DB' signing request thumbprint (MD5): 503A8EDD DE2BE17E 5BAA3A57 D68C9C1B signing request thumbprint (SHA1): B411ECE4 D45B883A 75A6F14D 7E3037F1 D53625F4 CA: AD-Name issuer: CN=windows-CA,DC=ad,DC=example,DC=com subject: CN=Test Certificate expires: 2018-05-06 10:28:06 UTC key usage: digitalSignature,keyEncipherment eku: iso.org.dod.internet.security.mechanisms.8.2.2 certificate template/profile: IPSECIntermediateOffline pre-save command: post-save command: track: yes auto-renew: yes"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/certmonger-scep |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_pool_library/making-open-source-more-inclusive |
Chapter 1. Overview | Chapter 1. Overview The Ruby software development kit is a Ruby gem that allows you to interact with the Red Hat Virtualization Manager in Ruby projects. By downloading these classes and adding them to your project, you can access a range of functionality for high-level automation of administrative tasks. 1.1. Prerequisites To install the Ruby software development kit, you must have: A system with Red Hat Enterprise Linux 7 installed. Both the Server and Workstation variants are supported. A subscription to Red Hat Virtualization entitlements. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/chap-overview |
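As a hedged sketch of the gem-based workflow described in this overview, the SDK is typically added with the gem command; the exact gem name below is an assumption and should be checked against the installation chapter of this guide:
gem install ovirt-engine-sdk
Once the gem is installed, its classes are pulled into a Ruby project with a require statement, after which the Manager can be reached through its REST API.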
Chapter 23. CICS | Chapter 23. CICS Since Camel 4.4-redhat Only producer is supported. This component allows you to interact with the IBM CICS (R) general-purpose transaction processing subsystem. Note Only synchronous mode calls are supported. 23.1. Dependencies When using camel-cics with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cics-starter</artifactId> </dependency> You must also declare the ctgclient.jar dependency when working with the camel-cics starter. This JAR is provided by IBM and is included in the CICS system. 23.2. URI format Where interfaceType is the CICS set of the external API that the camel-cics invokes. At the moment, only ECI (External Call Interface) is supported. This component communicates with the CICS server using two kinds of dataExchangeType . commarea is a block of storage, limited to 32763 bytes, allocated by the program. channel is the new mechanism for exchanging data, analogous to a parameter list. By default, if dataExchangeType is not specified, this component uses commarea : To use the channel and the container you must specify it explicitly in the URI. 23.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 23.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 23.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 23.4. Component Options The CICS component supports 17 options, which are listed below. Name Description Default Type ctgDebug Enable debug mode on the underlying IBM CGT client. false java.lang.Boolean eciBinding The Binding instance to transform a Camel Exchange to EciRequest and vice versa com.redhat.camel.component.cics.CICSEciBinding eciTimeout The ECI timeout value associated with this ECIRequest object.
An ECI timeout value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. An ECI timeout value greater than zero indicates that the ECIRequest may be timed out by CICS Transaction Gateway. ECI timeout can expire before a response is received from CICS. This means that the client does not receive the confirmation from CICS that a unit of work has been backed out or committed. 0 short encoding The transfer encoding of the message. Cp1145 java.lang.String gatewayFactory The connection factory to be used com.redhat.camel.component.cics.pool.CICSGatewayFactory host The address of the CICS Transaction Gateway that this instance connects to java.lang.String lazyStartProducer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. boolean port The port of the CICS Transaction Gateway that this instance connects to. 2006 int protocol the protocol that this component will use to connect to the CICS Transaction Gateway. tcp java.lang.String server The address of the CICS server that this instance connects to. java.lang.String sslKeyring The full classname of the SSL key ring class or keystore file to be used for the client encrypted connection. java.lang.String sslPassword The password for the encrypted key ring class or keystore java.lang.String configuration To use a shared CICS configuration com.redhat.camel.component.cics.CICSConfiguration socketConnectionTimeout The socket connection timeout int password Password to use for authentication java.lang.String userId User ID to use for authentication java.lang.String 23.5. Endpoint Options The CICS endpoint is configured using URI syntax: With the following path and query parameters: 23.5.1. Path Parameters (2 parameters) Name Description Default Type interfaceType The interface type, can be eci, esi or epi. at the moment only eci is supported. eci java.lang.String dataExchangeType The kind of data exchange to use Enum value: commarea channel commarea com.redhat.camel.component.cics.support.CICSDataExchangeType 23.5.2. Query Parameters (15 parameters) Name Description Default Type ctgDebug Enable debug mode on the underlying IBM CGT client. false java.lang.Boolean eciBinding The Binding instance to transform a Camel Exchange to EciRequest and vice versa com.redhat.camel.component.cics.CICSEciBinding eciTimeout The ECI timeout value associated with this ECIRequest object. An ECI timeout value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. An ECI timeout value greater than zero indicates that the ECIRequest may be timed out by CICS Transaction Gateway. ECI timeout can expire before a response is received from CICS. This means that the client does not receive the confirmation from CICS that a unit of work has been backed out or committed. 0 short encoding Encoding to convert COMMAREA data to before sending. 
Cp1145 java.lang.String gatewayFactory The connection factory to use com.redhat.camel.component.cics.pool.CICSGatewayFactory host The address of the CICS Transaction Gateway that this instance connects to localhost java.lang.String port The port of the CICS Transaction Gateway that this instance connects to. 2006 int protocol the protocol that this component will use to connect to the CICS Transaction Gateway. tcp java.lang.String server The address of the CICS server that this instance connects to java.lang.String lazyStartProducer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. boolean sslKeyring The full class name of the SSL key ring class or keystore file to be used for the client encrypted connection java.lang.String sslPassword The password for the encrypted key ring class or keystore java.lang.String socketConnectionTimeout The socket connection timeout int password Password to use for authentication java.lang.String userId User ID to use for authentication java.lang.String 23.6. Message Headers The CICS component supports 15 message header(s), which is/are listed below: Name Description Default Type CICS_RETURN_CODE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_RETURN_CODE_HEADER Return code from this flow operation. int CICS_RETURN_CODE_STRING Constant: com.redhat.camel.component.cics.CICSConstants#CICS_RETURN_CODE_STRING_HEADER The CICS return code as a String. The String is the name of the appropriate Java constant, for example, if this header is ECI_NO_ERROR, then the String returned will be ECI_NO_ERROR. If this header is unknown then the String returned will be ECI_UNKNOWN_CICS_RC. Note For CICS return codes that may have more than one meaning the String returned is a concatenation of the return codes. The only concatenated String is: ECI_ERR_REQUEST_TIMEOUT_OR_ERR_NO_REPLY. java.ang.String CICS_EXTEND_MODE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_EXTEND_MODE_HEADER Extend mode of request. The default value is ECI_NO_EXTEND. int CICS_LUW_TOKEN Constant: com.redhat.camel.component.cics.CICSConstants#CICS_LUW_TOKEN_HEADER Extended Logical Unit of Work token. The default value is ECI_LUW_NEW. int CICS_PROGRAM_NAME Constant: com.redhat.camel.component.cics.CICSConstants#CICS_PROGRAM_NAME_HEADER Program to invoke on CICS server. java.lang.String CICS_TRANSACTION_ID Constant: com.redhat.camel.component.cics.CICSConstants#CICS_TRANSACTION_ID_HEADER Transaction ID to run CICS program under. java.lang.String CICS_COMM_AREA_SIZE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_COMM_AREA_SIZE_HEADER Length of COMMAREA. The default value is 0. int CICS_CHANNEL_NAME Constant: com.redhat.camel.component.cics.CICSConstants#CICS_CHANNEL_NAME_HEADER The name of the channel to create com.redhat.camel.component.cics.CICSConstants#CICS_CHANNEL_NAME_HEADER java.lang.String CICS_CONTAINER_NAME Constant: com.redhat.camel.component.cics.CICSConstants#CICS_CONTAINER_NAME_HEADER The name of the container to create. 
java.lang.String CICS_CHANNEL_CCSID Constant: com.redhat.camel.component.cics.CICSConstants#CICS_CHANNEL_CCSID_HEADER The CCSID the channel should set as its default. int CICS_SERVER Constant: com.redhat.camel.component.cics.CICSConstants#CICS_SERVER_HEADER CICS server to direct request to. This header overrides the value configured in the endpoint. java.lang.String CICS_USER_ID Constant: com.redhat.camel.component.cics.CICSConstants#CICS_USER_ID_HEADER User ID for CICS server. This header overrides the value configured in the endpoint. java.lang.String CICS_PASSWORD Constant: com.redhat.camel.component.cics.CICSConstants#CICS_PASSWORD_HEADER Password or password phrase for CICS server. This header overrides the value configured in the endpoint. java.lang.String CICS_ABEND_CODE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_ABEND_CODE_HEADER CICS transaction abend code. java.lang.String CICS_ECI_REQUEST_TIMEOUT Constant: com.redhat.camel.component.cics.CICSConstants#CICS_ECI_REQUEST_TIMEOUT_HEADER The value, in seconds, of the ECI timeout for the current ECIRequest. A value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. 0 short CICS_ENCODING Constant: com.redhat.camel.component.cics.CICSConstants#CICS_ENCODING_HEADER Encoding to convert COMMAREA data to before sending. String 23.7. Samples 23.7.1. Using Commarea The following sample shows how to configure a route that runs a program on a CICS server using COMMAREA. The COMMAREA size has to be defined in the CICS_COMM_AREA_SIZE header, while the COMMAREA input data is defined in the Camel Exchange body. Note You must create a COMMAREA that is large enough to contain all the information to be sent to the server and large enough to contain all the information that can be returned from the server. //..... import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_COMM_AREA_SIZE_HEADER; //.... from("direct:run"). setHeader(CICS_PROGRAM_NAME_HEADER, "ECIREADY"). setHeader(CICS_COMM_AREA_SIZE_HEADER, 18). setBody(constant("My input data")). to("cics:eci/commarea?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar"); The outcome of the CICS program invocation is mapped to the Camel Exchange in this way: The numeric value of the return code is stored in the CICS_RETURN_CODE header The COMMAREA output data is stored in the Camel Exchange Body. 23.7.2. Using Channel with a single input container The following sample shows how to use a channel with a single container to run a CICS program. The channel name and the container name are taken from headers, and the container value from the body: //..... import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CONTAINER_NAME_HEADER; //... from("direct:run"). setHeader(CICS_PROGRAM_NAME_HEADER, "EC03"). setHeader(CICS_CHANNEL_NAME_HEADER, "SAMPLECHANNEL"). setHeader(CICS_CONTAINER_NAME_HEADER, "INPUTDATA"). setBody(constant("My input data")). to("cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar"); The containers returned are stored in a java.util.Map<String,Object> , where the key is the container name and the value is the output data of the container. A sketch showing one way to read this response map is included at the end of this chapter. 23.7.3.
Using Channel with multiple input containers If you need to run a CICS program that takes multiple containers as input, you can create a java.util.Map<String,Object> where the keys are the container names and the values are the input data. In this case, the CICS_CONTAINER_NAME header is ignored. //..... import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; //... from("direct:run"). setHeader(CICS_PROGRAM_NAME_HEADER, "EC03"). setHeader(CICS_CHANNEL_NAME_HEADER, "SAMPLECHANNEL"). process(exchange->{ byte[] thirdContainerData = HexFormat.of().parseHex("e04fd020ea3a6910a2d808002b30309d"); Map<String,Object> containers = Map.of( "firstContainerName", "firstContainerData", "secondContainerName", "secondContainerData", "thirdContainerName", thirdContainerData ); exchange.getMessage().setBody(containers); }). to("cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar"); 23.8. Spring Boot Auto-Configuration The component supports 17 options, which are listed below. Name Description Default Type camel.component.cics.binding The Binding instance to transform a Camel Exchange to EciRequest and vice versa. com.redhat.camel.component.cics.CICSEciBinding camel.component.cics.configuration Configuration. com.redhat.camel.component.cics.CICSConfiguration camel.component.cics.ctg-debug Enable debug mode on the underlying IBM CTG client. java.lang.Boolean camel.component.cics.eci-timeout The ECI timeout value associated with this ECIRequest object. An ECI timeout value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. An ECI timeout value greater than zero indicates that the ECIRequest may be timed out by CICS Transaction Gateway. ECI timeout can expire before a response is received from CICS. This means that the client does not receive the confirmation from CICS that a unit of work has been backed out or committed. java.lang.Short camel.component.cics.enabled Whether to enable auto configuration of the cics component. This is enabled by default. java.lang.Boolean camel.component.cics.encoding The transfer encoding of the message. Cp1145 java.lang.String camel.component.cics.gateway-factory The connection factory to be used. The option is a com.redhat.camel.component.cics.pool.CICSGatewayFactory type. com.redhat.camel.component.cics.pool.CICSGatewayFactory camel.component.cics.host The address of the CICS Transaction Gateway that this instance connects to java.lang.String camel.component.cics.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. java.lang.Boolean camel.component.cics.password Password to use for authentication java.lang.String camel.component.cics.port The port of the CICS Transaction Gateway that this instance connects to. 2006 java.lang.Integer camel.component.cics.protocol The protocol that this component will use to connect to the CICS Transaction Gateway.
tcp java.lang.String camel.component.cics.server The address of the CICS server that this instance connects to java.lang.String camel.component.cics.socket-connection-timeout The socket connection timeout java.lang.Integer camel.component.cics.ssl-keyring The full classname of the SSL key ring class or keystore file to be used for the client encrypted connection java.lang.String camel.component.cics.ssl-password The password for the encrypted key ring class or keystore java.lang.String camel.component.cics.user-id User ID to use for authentication java.lang.String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cics-starter</artifactId> </dependency>",
"<dependency> <artifactId>com.ibm</artifactId> <groupId>ctgclient</groupId> <scope>system</scope> <systemPath>USD{basedir}/lib/ctgclient.jar</systemPath> </dependency>",
"cics://[interfaceType]/[dataExchangeType][?options]",
"cics://eci?host=xxx&port=xxx",
"cics://eci/channel?host=xxx&port=xxx",
"cics://[interfaceType]/[dataExchangeType][?options]",
"//.. import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_COMM_AREA_SIZE_HEADER; //. from(\"direct:run\"). setHeader(CICS_PROGRAM_NAME_HEADER, \"ECIREADY\"). setHeader(CICS_COMM_AREA_SIZE_HEADER, 18). setBody(constant(\"My input data\")). to(\"cics:eci/commarea?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar\");",
"//.. import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CONTAINER_NAME_HEADER; // from(\"direct:run\"). setHeader(CICS_PROGRAM_NAME_HEADER, \"EC03\"). setHeader(CICS_CHANNEL_NAME_HEADER, \"SAMPLECHANNEL\"). setHeader(CICS_CONTAINER_NAME_HEADER, \"INPUTDATA\"). setBody(constant(\"My input data\")). to(\"cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar\");",
"//.. import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; // from(\"direct:run\"). setHeader(CICS_PROGRAM_NAME_HEADER, \"EC03\"). setHeader(CICS_CHANNEL_NAME_HEADER, \"SAMPLECHANNEL\"). process(exchange->{ byte[] thirdContainerData = HexFormat.of().parseHex(\"e04fd020ea3a6910a2d808002b30309d\"); Map<String,Object> containers = Map.of( \"firstContainerName\", \"firstContainerData\", \"secondContainerName\", \"secondContainerData\", \"thirdContainerName\", thirdContainerData ); exchange.getMessage().setBody(containers); }). to(\"cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cics-component-starter |
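The channel samples above show how to send containers to CICS but not how the response is consumed afterwards. The following sketch is an editorial illustration rather than part of the product documentation: it reuses the EC03 program, channel name, gateway address, and credentials from the channel samples, and it relies only on the behavior described above, namely that the response body is a java.util.Map<String,Object> keyed by container name and that the numeric return code is exposed through the CICS_RETURN_CODE header (CICSConstants.CICS_RETURN_CODE_HEADER). The container name "OUTPUTDATA" and the header name "cicsSampleReturnCode" are assumptions made only for this example.

//..... a minimal sketch: invoking a program over a channel and reading the response map
import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER;
import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER;
import static com.redhat.camel.component.cics.CICSConstants.CICS_CONTAINER_NAME_HEADER;
import static com.redhat.camel.component.cics.CICSConstants.CICS_RETURN_CODE_HEADER;

import java.util.Map;
//....
from("direct:run").
    setHeader(CICS_PROGRAM_NAME_HEADER, "EC03").
    setHeader(CICS_CHANNEL_NAME_HEADER, "SAMPLECHANNEL").
    setHeader(CICS_CONTAINER_NAME_HEADER, "INPUTDATA").
    setBody(constant("My input data")).
    to("cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar").
    process(exchange -> {
        // Numeric return code of the flow operation, taken from the CICS_RETURN_CODE header.
        Integer returnCode = exchange.getMessage().getHeader(CICS_RETURN_CODE_HEADER, Integer.class);

        // The response body is a Map keyed by container name; each value holds that container's output data.
        Map<?, ?> outputContainers = exchange.getMessage().getBody(Map.class);

        // "OUTPUTDATA" is an assumed container name used only for this illustration.
        Object outputData = outputContainers.get("OUTPUTDATA");

        // Make the pieces available to the rest of the route; the header name below is also illustrative.
        exchange.getMessage().setHeader("cicsSampleReturnCode", returnCode);
        exchange.getMessage().setBody(outputData);
    });

The same post-processing pattern applies to the COMMAREA sample, where the message body after the call carries the COMMAREA output data instead of a container map.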
Chapter 6. Working with Helm charts | Chapter 6. Working with Helm charts 6.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. Creating a chart in a cluster creates a running instance of the chart known as a release . Each time a chart is created, or a release is upgraded or rolled back, an incremental revision is created. 6.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Container Platform or Kubernetes resources. Package and share your applications as charts. 6.1.2. Red Hat Certification of Helm charts for OpenShift You can choose to verify and certify your Helm charts by Red Hat for all the components you will be deploying on the Red Hat OpenShift Container Platform. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters. 6.1.3. Additional resources For more information on how to certify your Helm charts as a Red Hat partner, see Red Hat Certification of Helm charts for OpenShift . For more information on OpenShift and Container certification guides for Red Hat partners, see Partner Guide for OpenShift and Container Certification . For a list of the charts, see the Red Hat Helm index file . You can view the available charts at the Red Hat Marketplace . For more information, see Using the Red Hat Marketplace . 6.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 6.2.1. On Linux Download the Helm binary and add it to your path: Linux (x86_64, amd64) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm Linux on IBM Power(R) (ppc64le) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 6.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 6.2.3. 
On Windows 10 Download the latest .exe file and put in a directory of your preference. Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 6.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 6.3. Configuring custom Helm chart repositories You can create Helm releases on an OpenShift Container Platform cluster using the following methods: The CLI. The Developer perspective of the web console. The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file . As a cluster administrator, you can add multiple cluster-scoped and namespace-scoped Helm chart repositories, separate from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . As a regular user or project member with the appropriate role-based access control (RBAC) permissions, you can add multiple namespace-scoped Helm chart repositories, apart from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . In the Developer perspective of the web console, you can use the Helm page to: Create Helm Releases and Repositories using the Create button. Create, update, or delete a cluster-scoped or namespace-scoped Helm chart repository. View the list of the existing Helm chart repositories in the Repositories tab, which can also be easily distinguished as either cluster scoped or namespace scoped. 6.3.1. Installing a Helm chart on an OpenShift Container Platform cluster Prerequisites You have a running OpenShift Container Platform cluster and you have logged into it. You have installed Helm. Procedure Create a new project: USD oc new-project vault Add a repository of Helm charts to your local Helm client: USD helm repo add openshift-helm-charts https://charts.openshift.io/ Example output "openshift-helm-charts" has been added to your repositories Update the repository: USD helm repo update Install an example HashiCorp Vault: USD helm install example-vault openshift-helm-charts/hashicorp-vault Example output NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault! Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2 6.3.2. Creating Helm releases using the Developer perspective You can use either the Developer perspective in the web console or the CLI to select and create a release from the Helm charts listed in the Developer Catalog . 
You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console. Prerequisites You have logged in to the web console and have switched to the Developer perspective . Procedure To create Helm releases from the Helm charts provided in the Developer Catalog : In the Developer perspective, navigate to the +Add view and select a project. Then click the Helm Chart option to see all the Helm Charts in the Developer Catalog . Select a chart and read the description, README, and other details about the chart. Click Create . Figure 6.1. Helm charts in developer catalog In the Create Helm Release page: Enter a unique name for the release in the Release Name field. Select the required chart version from the Chart Version drop-down list. Configure your Helm chart by using the Form View or the YAML View . Note Where available, you can switch between the YAML View and Form View . The data is persisted when switching between the views. Click Create to create a Helm release. The web console displays the new release in the Topology view. If a Helm chart has release notes, the web console displays them. If a Helm chart creates workloads, the web console displays them on the Topology or Helm release details page. The workloads are DaemonSet , CronJob , Pod , Deployment , and DeploymentConfig . View the newly created Helm release in the Helm Releases page. You can upgrade, roll back, or delete a Helm release by using the Actions button on the side panel or by right-clicking a Helm release. 6.3.3. Using Helm in the web terminal You can use Helm by accessing the web terminal in the Developer perspective of the web console. 6.3.4. Creating a custom Helm chart on OpenShift Container Platform Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Container Platform objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. 5 The version of your chart as per the Semantic Versioning (SemVer) 2.0.0 Specification. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the previous directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 6.3.5. Adding custom Helm chart repositories As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster.
Sample Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository, run: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed. For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 6.2. Chart repositories filter Note If a cluster administrator removes all of the chart repositories, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel. 6.3.6. Adding namespace-scoped custom Helm chart repositories The cluster-scoped HelmChartRepository custom resource definition (CRD) for Helm repository provides the ability for administrators to add Helm repositories as custom resources. The namespace-scoped ProjectHelmChartRepository CRD allows project members with the appropriate role-based access control (RBAC) permissions to create Helm repository resources of their choice but scoped to their namespace. Such project members can see charts from both cluster-scoped and namespace-scoped Helm repository resources. Note Administrators can limit users from creating namespace-scoped Helm repository resources. By limiting users, administrators have the flexibility to control the RBAC through a namespace role instead of a cluster role. This avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications. The addition of the namespace-scoped Helm repository does not impact the behavior of the existing cluster-scoped Helm repository. As a regular user or project member with the appropriate RBAC permissions, you can add custom namespace-scoped Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new namespace-scoped Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your namespace. Sample Namespace-scoped Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository scoped to your my-namespace namespace, run: USD cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF The output verifies that the namespace-scoped Helm Chart Repository CR is created: Example output Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed in your my-namespace namespace. 
For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 6.3. Chart repositories filter in your namespace Alternatively, run: USD oc get projecthelmchartrepositories --namespace my-namespace Example output Note If a cluster administrator or a regular user with appropriate RBAC permissions removes all of the chart repositories in a specific namespace, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel for that specific namespace. 6.3.7. Creating credentials and CA certificates to add Helm chart repositories Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to it. You can use the web console as well as the CLI to add credentials and certificates. Procedure To configure the credentials and certificates, and then add a Helm chart repository using the CLI: In the openshift-config namespace, create a ConfigMap object with a custom CA certificate in PEM encoded format, and store it under the ca-bundle.crt key within the config map: USD oc create configmap helm-ca-cert \ --from-file=ca-bundle.crt=/path/to/certs/ca.crt \ -n openshift-config In the openshift-config namespace, create a Secret object to add the client TLS configurations: USD oc create secret tls helm-tls-configs \ --cert=/path/to/certs/client.crt \ --key=/path/to/certs/client.key \ -n openshift-config Note that the client certificate and key must be in PEM encoded format and stored under the keys tls.crt and tls.key , respectively. Add the Helm repository as follows: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF The ConfigMap and Secret are consumed in the HelmChartRepository CR using the tlsConfig and ca fields. These certificates are used to connect to the Helm repository URL. By default, all authenticated users have access to all configured charts. However, for chart repositories where certificates are needed, you must provide users with read access to the helm-ca-cert config map and helm-tls-configs secret in the openshift-config namespace, as follows: USD cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["helm-ca-cert"] verbs: ["get"] - apiGroups: [""] resources: ["secrets"] resourceNames: ["helm-tls-configs"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF 6.3.8. Filtering Helm Charts by their certification level You can filter Helm charts based on their certification level in the Developer Catalog . Procedure In the Developer perspective, navigate to the +Add view and select a project. From the Developer Catalog tile, select the Helm Chart option to see all the Helm charts in the Developer Catalog . 
Use the filters to the left of the list of Helm charts to filter the required charts: Use the Chart Repositories filter to filter charts provided by Red Hat Certification Charts or OpenShift Helm Charts . Use the Source filter to filter charts sourced from Partners , Community , or Red Hat . Certified charts are indicated with a certification icon. Note The Source filter will not be visible when there is only one provider type. You can now select the required chart and install it. 6.3.9. Disabling Helm Chart repositories You can disable Helm Charts from a particular Helm Chart Repository in the catalog by setting the disabled property in the HelmChartRepository custom resource to true . Procedure To disable a Helm Chart repository by using the CLI, add the disabled: true flag to the custom resource. For example, to disable an Azure sample chart repository, run: To disable a recently added Helm Chart repository by using the web console: Go to Custom Resource Definitions and search for the HelmChartRepository custom resource. Go to Instances , find the repository you want to disable, and click its name. Go to the YAML tab, add the disabled: true flag in the spec section, and click Save . Example The repository is now disabled and will not appear in the catalog. 6.4. Working with Helm releases You can use the Developer perspective in the web console to update, roll back, or delete a Helm release. 6.4.1. Prerequisites You have logged in to the web console and have switched to the Developer perspective . 6.4.2. Upgrading a Helm release You can upgrade a Helm release to upgrade to a new chart version or update your release configuration. Procedure In the Topology view, select the Helm release to see the side panel. Click Actions Upgrade Helm Release . In the Upgrade Helm Release page, select the Chart Version you want to upgrade to, and then click Upgrade to create another Helm release. The Helm Releases page displays the two revisions. 6.4.3. Rolling back a Helm release If a release fails, you can roll back the Helm release to a previous version. Procedure To roll back a release using the Helm view: In the Developer perspective, navigate to the Helm view to see the Helm Releases in the namespace. Click the Options menu adjoining the listed release, and select Rollback . In the Rollback Helm Release page, select the Revision you want to roll back to and click Rollback . In the Helm Releases page, click on the chart to see the details and resources for that release. Go to the Revision History tab to see all the revisions for the chart. Figure 6.4. Helm revision history If required, you can further use the Options menu adjoining a particular revision and select the revision to roll back to. 6.4.4. Deleting a Helm release Procedure In the Topology view, right-click the Helm release and select Delete Helm Release . In the confirmation prompt, enter the name of the chart and click Delete . | [
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/building_applications/working-with-helm-charts |
Server Configuration Guide | Server Configuration Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services | [
"bin/kc.[sh|bat] start --db-url-host=mykeycloakdb",
"export KC_DB_URL_HOST=mykeycloakdb",
"db-url-host=mykeycloakdb",
"bin/kc.[sh|bat] start --help",
"db-url-host=USD{MY_DB_HOST}",
"db-url-host=USD{MY_DB_HOST:mydb}",
"bin/kc.[sh|bat] --config-file=/path/to/myconfig.conf start",
"keytool -importpass -alias kc.db-password -keystore keystore.p12 -storepass keystorepass -storetype PKCS12 -v",
"bin/kc.[sh|bat] start --config-keystore=/path/to/keystore.p12 --config-keystore-password=keystorepass --config-keystore-type=PKCS12",
"bin/kc.[sh|bat] start-dev",
"bin/kc.[sh|bat] start",
"bin/kc.[sh|bat] build <build-options>",
"bin/kc.[sh|bat] build --help",
"bin/kc.[sh|bat] build --db=postgres",
"bin/kc.[sh|bat] start --optimized <configuration-options>",
"bin/kc.[sh|bat] build --db=postgres",
"db-url-host=keycloak-postgres db-username=keycloak db-password=change_me hostname=mykeycloak.acme.com https-certificate-file",
"bin/kc.[sh|bat] start --optimized",
"bin/kc.[sh|bat] start --spi-admin-allowed-system-variables=FOO,BAR",
"export JAVA_OPTS_APPEND=\"-Djava.net.preferIPv4Stack=true\"",
"bin/kc.[sh|bat] start --bootstrap-admin-username tmpadm --bootstrap-admin-password pass",
"bin/kc.[sh|bat] start-dev --bootstrap-admin-client-id tmpadm --bootstrap-admin-client-secret secret",
"bin/kc.[sh|bat] bootstrap-admin user",
"bin/kc.[sh|bat] bootstrap-admin user --username tmpadm --password:env PASS_VAR",
"bin/kc.[sh|bat] bootstrap-admin service",
"bin/kc.[sh|bat] bootstrap-admin service --client-id tmpclient --client-secret:env=SECRET_VAR",
"bin/kcadm.[sh|bat] config credentials --server http://localhost:8080 --realm master --client <service_account_client_name> --secret <service_account_secret>",
"bin/kcadm.[sh|bat] get users/{userId}/credentials -r {realm}",
"bin/kcadm.[sh|bat] delete users/{userId}/credentials/{credentialId} -r {realm}",
"bin/kc.[sh|bat] bootstrap-admin user --username tmpadm --no-prompt",
"bin/kc.[sh|bat] bootstrap-admin user --password:env PASS_VAR --no-prompt",
"bin/kc.[sh|bat] bootstrap-admin user --username:env <YourUsernameEnv> --password:env <YourPassEnv>",
"bin/kc.[sh|bat] bootstrap-admin service --client-id:env <YourClientIdEnv> --client-secret:env <YourSecretEnv>",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname \"CN=server\" -alias server -ext \"SAN:c=DNS:localhost,IP:127.0.0.1\" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:26 COPY --from=builder /opt/keycloak/ /opt/keycloak/ change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]",
"A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && dnf --installroot /mnt/rootfs clean all && rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs /",
"build . -t mykeycloak",
"run --name mykeycloak -p 8443:8443 -p 9000:9000 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname=localhost",
"run --name mykeycloak -p 3000:8443 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname=https://localhost:3000",
"run --name mykeycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev",
"run --name mykeycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:26 start --db=postgres --features=token-exchange --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> --https-key-store-file=<file> --https-key-store-password=<password>",
"setting the admin username -e KC_BOOTSTRAP_ADMIN_USERNAME=<admin-user-name> setting the initial password -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me",
"run --name keycloak_unoptimized -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me -v /path/to/realm/data:/opt/keycloak/data/import registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev --import-realm",
"run --name mykeycloak -p 8080:8080 -m 1g -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me -e JAVA_OPTS_KC_HEAP=\"-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65\" registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev",
"bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem",
"bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file",
"bin/kc.[sh|bat] start --https-key-store-password=<value>",
"bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>]",
"bin/kc.[sh|bat] start --https-port=<port>",
"bin/kc.[sh|bat] start --hostname my.keycloak.org",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org:123/auth",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org --http-enabled true",
"bin/kc.[sh|bat] start --hostname-strict false --proxy-headers forwarded",
"bin/kc.[sh|bat] start --hostname my.keycloak.org --proxy-headers xforwarded",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org --proxy-headers xforwarded",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443",
"bin/kc.[sh|bat] start --hostname my.keycloak.org",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true",
"bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true",
"bin/kc.[sh|bat] start --proxy-headers forwarded",
"bin/kc.[sh|bat] start --spi-sticky-session-encoder-infinispan-should-attach-route=false",
"bin/kc.[sh|bat] start --proxy-headers forwarded --proxy-trusted-addresses=192.168.0.32,127.0.0.0/8",
"bin/kc.[sh|bat] start --proxy-protocol-enabled true",
"bin/kc.[sh|bat] build --spi-x509cert-lookup-provider=<provider>",
"bin/kc.[sh|bat] start --spi-x509cert-lookup-<provider>-ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup-<provider>-ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup-<provider>-certificate-chain-length=10",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.5.0.24.07/ojdbc11-23.5.0.24.07.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.5.0.24.07/orai18n-23.5.0.24.07.jar /opt/keycloak/providers/orai18n.jar Setting the build parameter for the database: ENV KC_DB=oracle Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.8.1.jre11/mssql-jdbc-12.8.1.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar Setting the build parameter for the database: ENV KC_DB=mssql Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build",
"bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me",
"bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase",
"bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver",
"show server_encoding;",
"create database keycloak with encoding 'UTF8';",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.3.1/aws-advanced-jdbc-wrapper-2.3.1.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar",
"bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900",
"bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=true",
"bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual",
"bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-initialize-empty=false",
"bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql>",
"bin/kc.[sh|bat] start --cache=ispn",
"<distributed-cache name=\"sessions\" owners=\"2\"> <expiration lifespan=\"-1\"/> </distributed-cache>",
"bin/kc.sh start --features-disabled=persistent-user-sessions",
"bin/kc.[sh|bat] start --cache-config-file=my-cache-file.xml",
"bin/kc.[sh|bat] start --cache-stack=<stack>",
"bin/kc.[sh|bat] start --cache-stack=<ec2|google|azure>",
"<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> <SSL_KEY_EXCHANGE keystore_name=\"server.jks\" keystore_password=\"password\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"keycloak\"> <transport lock-timeout=\"60000\" stack=\"my-encrypt-udp\"/> </cache-container>",
"bin/kc.[sh|bat] start --metrics-enabled=true --cache-metrics-histograms-enabled=true",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value>",
"HTTPS_PROXY=https://www-proxy.acme.com:8080 NO_PROXY=google.com,login.facebook.com",
".*\\.(google|googleapis)\\.com",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings='.*\\\\.(google|googleapis)\\\\.com;http://www-proxy.acme.com:8080'",
".*\\.(google|googleapis)\\.com;http://proxyuser:[email protected]:8080",
"All requests to Google APIs use http://www-proxy.acme.com:8080 as proxy .*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080 All requests to internal systems use no proxy .*\\.acme\\.com;NO_PROXY All other requests use http://fallback:8080 as proxy .*;http://fallback:8080",
"bin/kc.[sh|bat] start --truststore-paths=/opt/truststore/myTrustStore.pfx,/opt/other-truststore/myOtherTrustStore.pem",
"bin/kc.[sh|bat] start --https-client-auth=<none|request|required>",
"bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file --https-trust-store-password=<value>",
"bin/kc.[sh|bat] build --features=\"<name>[,<name>]\"",
"bin/kc.[sh|bat] build --features=\"docker,token-exchange\"",
"bin/kc.[sh|bat] build --features=\"preview\"",
"bin/kc.[sh|bat] build --features-disabled=\"<name>[,<name>]\"",
"bin/kc.[sh|bat] build --features-disabled=\"impersonation\"",
"spi-<spi-id>-<provider-id>-<property>=<value>",
"spi-connections-http-client-default-connection-pool-size=10",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-connection-pool-size=10",
"bin/kc.[sh|bat] build --spi-email-template-provider=mycustomprovider",
"bin/kc.[sh|bat] build --spi-password-hashing-provider-default=mycustomprovider",
"bin/kc.[sh|bat] build --spi-email-template-mycustomprovider-enabled=true",
"bin/kc.[sh|bat] start --log-level=<root-level>",
"bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"",
"bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"",
"bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"",
"bin/kc.[sh|bat] start --log=console,file --log-level=debug --log-console-level=info",
"bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug --log-console-level=warn --log-syslog-level=warn",
"bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug,org.keycloak.events:trace, --log-syslog-level=trace --log-console-level=info --log-file-level=info",
"bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"",
"bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"",
"bin/kc.[sh|bat] start --log-console-output=json",
"{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}",
"bin/kc.[sh|bat] start --log-console-output=default",
"2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080",
"bin/kc.[sh|bat] start --log-console-color=<false|true>",
"bin/kc.[sh|bat] start --log-console-level=warn",
"bin/kc.[sh|bat] start --log=\"console,file\"",
"bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>",
"bin/kc.[sh|bat] start --log-file-format=\"<pattern>\"",
"bin/kc.[sh|bat] start --log-file-level=warn",
"bin/kc.[sh|bat] start --log=\"console,syslog\"",
"bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-app-name=kc-p-itadmins",
"bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-endpoint=myhost:12345",
"bin/kc.[sh|bat] start --log-syslog-level=warn",
"bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-protocol=udp",
"bin/kc.[sh|bat] start --log-syslog-format=\"'<format>'\"",
"bin/kc.[sh|bat] start --log-syslog-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"",
"bin/kc.[sh|bat] start --log-syslog-type=rfc3164",
"bin/kc.[sh|bat] start --log-syslog-max-length=1536",
"bin/kc.[sh|bat] start --log-syslog-output=json",
"2024-04-05T12:32:20.616+02:00 host keycloak 2788276 io.quarkus - {\"timestamp\":\"2024-04-05T12:32:20.616208533+02:00\",\"sequence\":9948,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Profile prod activated. \",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host\",\"processName\":\"QuarkusEntryPoint\",\"processId\":2788276}",
"bin/kc.[sh|bat] start --log-syslog-output=default",
"2024-04-05T12:31:38.473+02:00 host keycloak 2787568 io.quarkus - 2024-04-05 12:31:38,473 INFO [io.quarkus] (main) Profile prod activated.",
"fips-mode-setup --check",
"fips-mode-setup --enable",
"keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -keystore USDKEYCLOAK_HOME/conf/server.keystore -alias localhost -dname CN=localhost -keypass passwordpassword",
"securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS",
"keytool -keystore USDKEYCLOAK_HOME/conf/server.keystore -storetype bcfks -providername BCFIPS -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath USDKEYCLOAK_HOME/providers/bc-fips-*.jar -alias localhost -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -dname CN=localhost -keypass passwordpassword -J-Djava.security.properties=/tmp/kc.keystore-create.java.security",
"bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE",
"KC(BCFIPS version 2.0 Approved Mode, FIPS-JVM: enabled) version 1.0 - class org.keycloak.crypto.fips.KeycloakFipsSecurityProvider,",
"--spi-password-hashing-pbkdf2-sha512-max-padding-length=14",
"fips.provider.7=XMLDSig",
"-Djava.security.properties=/location/to/your/file/kc.java.security",
"cp USDKEYCLOAK_HOME/providers/bc-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bctls-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bcutil-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/",
"echo \"keystore.type=bcfks fips.keystore.type=bcfks\" > /tmp/kcadm.java.security export KC_OPTS=\"-Djava.security.properties=/tmp/kcadm.java.security\"",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder ADD files /tmp/files/ WORKDIR /opt/keycloak RUN cp /tmp/files/*.jar /opt/keycloak/providers/ RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/ RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict FROM registry.redhat.io/rhbk/keycloak-rhel9:26 COPY --from=builder /opt/keycloak/ /opt/keycloak/ ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]",
"{ \"status\": \"UP\", \"checks\": [] }",
"{ \"status\": \"UP\", \"checks\": [ { \"name\": \"Keycloak database connections health check\", \"status\": \"UP\" } ] }",
"bin/kc.[sh|bat] build --health-enabled=true",
"curl --head -fsS http://localhost:9000/health/ready",
"bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true",
"bin/kc.[sh|bat] start --metrics-enabled=true",
"HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector. TYPE base_gc_total counter base_gc_total{name=\"G1 Young Generation\",} 14.0 HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1] TYPE jvm_memory_usage_after_gc_percent gauge jvm_memory_usage_after_gc_percent{area=\"heap\",pool=\"long-lived\",} 0.0 HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset TYPE jvm_threads_peak_threads gauge jvm_threads_peak_threads 113.0 HELP agroal_active_count Number of active connections. These connections are in use and not available to be acquired. TYPE agroal_active_count gauge agroal_active_count{datasource=\"default\",} 0.0 HELP base_memory_maxHeap_bytes Displays the maximum amount of memory, in bytes, that can be used for memory management. TYPE base_memory_maxHeap_bytes gauge base_memory_maxHeap_bytes 1.6781410304E10 HELP process_start_time_seconds Start time of the process since unix epoch. TYPE process_start_time_seconds gauge process_start_time_seconds 1.675188449054E9 HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time TYPE system_load_average_1m gauge system_load_average_1m 4.005859375",
"bin/kc.[sh|bat] start --tracing-enabled=true --features=opentelemetry",
"run --name jaeger -p 16686:16686 -p 4317:4317 -p 4318:4318 jaegertracing/all-in-one",
"2024-08-05 15:27:07,144 traceId=b636ac4c665ceb901f7fdc3fc7e80154, parentId=d59cea113d0c2549, spanId=d59cea113d0c2549, sampled=true WARN [org.keycloak.events]",
"bin/kc.[sh|bat] start --tracing-enabled=true --features=opentelemetry --log=console --log-console-include-trace=false",
"bin/kc.[sh|bat] export --help",
"bin/kc.[sh|bat] export --dir <dir>",
"bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100",
"bin/kc.[sh|bat] export --file <file>",
"bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm",
"bin/kc.[sh|bat] import --help",
"bin/kc.[sh|bat] import --dir <dir>",
"bin/kc.[sh|bat] import --dir <dir> --override false",
"bin/kc.[sh|bat] import --file <file>",
"{ \"realm\": \"USD{MY_REALM_NAME}\", \"enabled\": true, }",
"bin/kc.[sh|bat] start --import-realm",
"bin/kc.[sh|bat] build --vault=file",
"bin/kc.[sh|bat] build --vault=keystore",
"bin/kc.[sh|bat] start --vault-dir=/my/path",
"USD{vault.<realmname>_<secretname>}",
"keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword",
"bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value>",
"sso__realm_ldap__credential"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/server_configuration_guide//db- |
Deploying web servers and reverse proxies | Deploying web servers and reverse proxies Red Hat Enterprise Linux 9 Setting up and configuring web servers and reverse proxies in Red Hat Enterprise Linux 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_web_servers_and_reverse_proxies/index |
Introduction to the OpenStack Dashboard | Introduction to the OpenStack Dashboard Red Hat OpenStack Platform 16.2 An overview of the Red Hat OpenStack Platform Dashboard graphical user interface OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/introduction_to_the_openstack_dashboard/index |
Deploying a hyperconverged infrastructure environment | Deploying a hyperconverged infrastructure environment Red Hat OpenStack Services on OpenShift 18.0 Deploying a hyperconverged infrastructure (HCI) environment for Red Hat OpenStack Services on OpenShift OpenStack Documentation Team [email protected] | [
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: services: - configure-network - validate-network - install-os - configure-os - run-os - ovn - libvirt - nova",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: services: - download-cache - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - ctlplane nodeTemplate: ansible: ansiblePort: 22 ansibleUser: cloud-admin ansibleVars: edpm_ceph_hci_pre_enabled_services: - ceph_mon - ceph_mgr - ceph_osd - ceph_rgw - ceph_nfs - ceph_rgw_frontend - ceph_nfs_frontend edpm_fips_mode: check edpm_iscsid_image: {{ registry_url }}/openstack-iscsid:{{ image_tag }} edpm_logrotate_crond_image: {{ registry_url }}/openstack-cron:{{ image_tag }} edpm_network_config_hide_sensitive_logs: false edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:00:1e:af:6b nic2: 52:54:00:d9:cb:f4 edpm-compute-1: nic1: 52:54:00:f2:bc:af nic2: 52:54:00:f1:c7:dd edpm-compute-2: nic1: 52:54:00:dd:33:14 nic2: 52:54:00:50:fb:c3 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup( vars , networks_lower[network] ~ _mtu ) }} vlan_id: {{ lookup( vars , networks_lower[network] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[network] ~ _ip ) }}/{{ lookup( vars , networks_lower[network] ~ _cidr ) }} routes: {{ lookup( vars , networks_lower[network] ~ _host_routes ) }} {% endfor %} edpm_neutron_metadata_agent_image: {{ registry_url }}/openstack-neutron-metadata-agent-ovn:{{ image_tag }} edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false edpm_selinux_mode: enforcing edpm_sshd_allowed_ranges: - 192.168.122.0/24 - 192.168.111.0/24 edpm_sshd_configure_firewall: true enable_debug: false gather_facts: false image_tag: current-podified neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 service_net_map: nova_api_network: internalapi nova_libvirt_network: internalapi storage_mgmt_cidr: \"24\" storage_mgmt_host_routes: [] storage_mgmt_mtu: 9000 storage_mgmt_vlan_id: 23 storage_mtu: 9000 timesync_ntp_servers: - hostname: pool.ntp.org ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret managementNetwork: ctlplane networks: - defaultRoute: true name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 nodes: edpm-compute-0: ansible: host: 192.168.122.100 hostName: compute-0 networks: - defaultRoute: true fixedIP: 192.168.122.100 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: storagemgmt subnetName: subnet1 - name: tenant subnetName: subnet1 edpm-compute-1: ansible: host: 192.168.122.101 hostName: compute-1 networks: - defaultRoute: true fixedIP: 192.168.122.101 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: storagemgmt subnetName: subnet1 - name: tenant subnetName: 
subnet1 edpm-compute-2: ansible: host: 192.168.122.102 hostName: compute-2 networks: - defaultRoute: true fixedIP: 192.168.122.102 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: storagemgmt subnetName: subnet1 - name: tenant subnetName: subnet1 preProvisioned: true services: - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os",
"oc apply -f <dataplane_cr_file>",
"ping -M do -s 8972 172.20.0.100",
"[global] public_network = 172.18.0.0/24 cluster_network = 172.20.0.0/24",
"[osd] osd_memory_target_autotune = true osd_numa_auto_affinity = true [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2",
"cephadm shell -- ceph -s",
"ceph config dump | grep numa ceph config dump | grep autotune ceph config dump | get mgr",
"ceph config get <osd_number> osd_memory_target ceph config get <osd_number> osd_memory_target_autotune ceph config get <osd_number> osd_numa_auto_affinity",
"ceph config get <osd_number> osd_recovery_op_priority ceph config get <osd_number> osd_max_backfills ceph config get <osd_number> osd_recovery_max_active_hdd ceph config get <osd_number> osd_recovery_max_active_ssd",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: nodeTemplate: extraMounts: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: services: - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - ceph-client - ovn - neutron-metadata - libvirt - nova-custom-ceph",
"apiVersion: v1 kind: ConfigMap metadata: name: reserved-memory-nova data: 04-reserved-memory-nova.conf: | [DEFAULT] reserved_host_memory_mb=75000",
"kind: OpenStackDataPlaneService <...> spec: configMaps: - ceph-nova - reserved-memory-nova",
"oc apply -f <dataplane_cr_file>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html-single/deploying_a_hyperconverged_infrastructure_environment/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.21/making-open-source-more-inclusive |
Chapter 13. Troubleshooting builds | Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: $ oc describe quota 13.2. Service certificate generation failure If service certificate generation fails: Issue A service certificate generation fails with (the service's service.beta.openshift.io/serving-cert-generation-error annotation contains): Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . To clear the annotations, enter the following commands: $ oc delete secret <secret_name> $ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- $ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing an annotation has a - after the annotation name to be removed. | [
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_buildconfig/troubleshooting-builds_build-configuration |
Chapter 2. Data Grid deployment planning | Chapter 2. Data Grid deployment planning To get the best performance for your Data Grid deployment, you should do the following things: Calculate the size of your data set. Determine what type of clustered cache mode best suits your use case and requirements. Understand performance trade-offs and considerations for Data Grid capabilities that provide fault tolerance and consistency guarantees. 2.1. Performance metric considerations Data Grid includes so many configurable combinations that determining a single formula for performance metrics that covers all use cases is not possible. The purpose of the Data Grid Performance and Sizing Guide document is to provide details about use cases and architectures that can help you determine requirements for your Data Grid deployment. Additionally, consider the following inter-related factors that apply to Data Grid: Available CPU and memory resources in cloud environments Caches used in parallel Get, put, query balancing Peak load and throughput limitations Querying limitations with data set Number of entries per cache Size of cache entries Given the number of different combinations and unknown external factors, providing a performance calculation that meets all Data Grid use cases is not possible. You cannot compare one performance test to another test if any of the previously listed factors are different. You can run basic performance tests with the Data Grid CLI that collects limited performance metrics. You can customize the performance test so that the test outputs results that might meet your needs. Test results provide baseline metrics that can help you determine settings and resources for your Data Grid caching requirements. Measure the performance of your current settings and check if they meet your requirements. If your needs are not met, optimize the settings and then re-measure their performance. 2.2. How to calculate the size of your data set Planning a Data Grid deployment involves calculating the size of your data set and then figuring out the correct number of nodes and amount of RAM to hold the data set. You can roughly estimate the total size of your data set with the formula sketched below. Note With remote caches you need to calculate key sizes and value sizes in their marshalled forms. Data set size in distributed caches Distributed caches require some additional calculation to determine the data set size. In normal operating conditions, distributed caches store a number of copies for each key/value entry that is equal to the Number of owners that you configure. During cluster rebalancing operations, some entries have an extra copy, so you should calculate Number of owners + 1 to allow for that scenario. You can adjust the estimate of your data set size for distributed caches as sketched below. Calculating available memory for distributed caches Distributed caches allow you to increase the data set size either by adding more nodes or by increasing the amount of available memory per node. Adjusting for node loss tolerance Even if you plan to have a fixed number of nodes in the cluster, you should take into account the fact that not all nodes will be in the cluster all the time. Distributed caches tolerate the loss of Number of owners - 1 nodes without losing data so you can allocate that many extra nodes in addition to the minimum number of nodes that you need to fit your data set.
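The following lines are a rough sketch of the sizing arithmetic described above; they illustrate the reasoning rather than quote an official formula, and they ignore JVM headroom and per-entry memory overhead unless stated otherwise:
Data set size = Number of entries * (Average key size + Average value size + Memory overhead per entry)
Distributed data set size = Data set size * (Number of owners + 1)
Minimum number of nodes = Distributed data set size / Memory available for data per node
Planned number of nodes = Minimum number of nodes + Number of owners - 1
Plugging in the example that follows (one million entries of 10KB each, three owners, 4GB per node) and ignoring per-entry overhead: 1,000,000 * 10KB = 10GB of raw data; 10GB * (3 + 1) = 40GB held in memory across the cluster; 40GB / 4GB = 10 nodes at a minimum; 10 + 3 - 1 = 12 nodes planned so that the cluster can lose two nodes without losing data.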
For example, you plan to store one million entries that are 10KB each in size and configure three owners per entry for availability. If you plan to allocate 4GB of RAM for each node in the cluster, you can then use the following formula to determine the number of nodes that you need for your data set: 2.2.1. Memory overhead Memory overhead is additional memory that Data Grid uses to store entries. An approximate estimate for memory overhead is 200 bytes per entry in JVM heap memory or 60 bytes per entry in off-heap memory. It is impossible to determine a precise amount of memory overhead upfront, however, because the overhead that Data Grid adds per entry depends on several factors. For example, bounding the data container with eviction results in Data Grid using additional memory to keep track of entries. Likewise configuring expiration adds timestamps metadata to each entry. The only way to find any kind of exact amount of memory overhead involves JVM heap dump analysis. Of course JVM heap dumps provide no information for entries that you store in off-heap memory but memory overhead is much lower for off-heap memory than JVM heap memory. Additional memory usage In addition to the memory overhead that Data Grid imposes per entry, processes such as rebalancing and indexing can increase overall memory usage. Rebalancing operations for clusters when nodes join and leave also temporarily require some extra capacity to prevent data loss while replicating entries between cluster members. 2.2.2. JVM heap space allocation Determine the volume of memory that you require for your Data Grid deployment, so that you have enough data storage capacity to meet your needs. Important Allocating a large memory heap size in combination with setting garbage collection (GC) times might impact the performance of your Data Grid deployment in the following ways: If a JVM handles only one thread, the GC might block the thread and reduce the JVM's performance. The GC might operate ahead of the deployment. This asynchronous behavior might cause large GC pauses. If CPU resources are low and GC operates synchronously with the deployment, GC might require more frequent runs that can degrade your deployment's performance. The following table outlines two examples of allocating JVM heap space for data storage. These examples represent safe estimates for deploying a cluster. Cache operations only, such as read, write, and delete operations. Allocate 50% of JVM heap space for data storage Cache operations and data processing, such as queries and cache event listeners. Allocate 33% of JVM heap space for data storage Note Depending on pattern changes and usage for data storage, you might consider setting a different percentage for JVM heap space than any of the suggested safe estimates. Consider setting a safe estimate before you start your Data Grid deployment. After you start your deployment, check the performance of your JVM and the occupancy of heap space. You might need to re-adjust JVM heap space when data usage and throughput for your JVM significantly increases. The safe estimates were calculated on the assumption that the following common operations were running inside a JVM. The list is not exhaustive, and you might set one of these safe estimates with the purpose of performing additional operations. Data Grid converts objects in serialized form to key-value pairs. Data Grid adds the pairs to caches and persistent storage. Data Grid encrypts and decrypts caches from remote connections to clients. 
Data Grid performs regular querying of caches to collect data. Data Grid strategically divides data into segments to ensure efficient distribution of data among clusters, even during a state transfer operation. GC performs more frequent garbage collections, because the JVM allocated large volumes of memory for GC operations. GC dynamically manages and monitors data objects in JVM heap space to ensure safe removal of unused objects. Consider the following factors when allocating JVM heap space for data storage, and when determining the volume of memory and CPU requirements for your Data Grid deployment: Clustered cache mode. Number of segments. For example, a low number of segments might affect how a server distributes data among nodes. Read or write operations. Rebalancing requirements. For example, a high number of threads might quickly run in parallel during a state transfer, but each thread operation might use more memory. Scaling clusters. Synchronous or asynchronous replication. Most notable Data Grid operations that require high CPU resources include rebalancing nodes after pod restarts, running indexing queries on data, and performing GC operations. Off-heap storage Data Grid uses JVM heap representations of objects to process read and write operations on caches or perform other operations, such as a state transfer operation. You must always allocate some JVM heap space to Data Grid, even if you store entries in off-heap memory. The volume of JVM heap memory that Data Grid uses with off-heap storage is much smaller when compared with storing data in the JVM heap space. The JVM heap memory requirements for off-heap storage scales with the number of concurrent operations as against the number of stored entries. Data Grid uses topology caches to provide clients with a cluster view. If you receive any OutOfMemoryError exceptions from your Data Grid cluster, consider the options: Disable the state transfer operation, which might results in data loss if a node joins or leaves a cluster. Recalculate the JVM heap space by factoring in the key size and the number of nodes and segments. Use more nodes to better manage memory consumption for your cluster. Use a single node, because this might use less memory. However, consider the impact if you want to scale your cluster to its original size. 2.3. Clustered cache modes You can configure clustered Data Grid caches as replicated or distributed. Distributed caches Maximize capacity by creating fewer copies of each entry across the cluster. Replicated caches Provide redundancy by creating a copy of all entries on each node in the cluster. Reads:Writes Consider whether your applications perform more write operations or more read operations. In general, distributed caches offer the best performance for writes while replicated caches offer the best performance for reads. To put k1 in a distributed cache on a cluster of three nodes with two owners, Data Grid writes k1 twice. The same operation in a replicated cache means Data Grid writes k1 three times. The amount of additional network traffic for each write to a replicated cache is equal to the number of nodes in the cluster. A replicated cache on a cluster of ten nodes results in a tenfold increase in traffic for writes and so on. You can minimize traffic by using a UDP stack with multicasting for cluster transport. To get k1 from a replicated cache, each node can perform the read operation locally. 
Whereas, to get k1 from a distributed cache, the node that handles the operation might need to retrieve the key from a different node in the cluster, which results in an extra network hop and increases the time for the read operation to complete. Client intelligence and near-caching Data Grid uses consistent hashing techniques to make Hot Rod clients topology-aware and avoid extra network hops, which means read operations have the same performance for distributed caches as they do for replicated caches. Hot Rod clients can also use near-caching capabilities to keep frequently accessed entries in local memory and avoid repeated reads. Tip Distributed caches are the best choice for most Data Grid Server deployments. You get the best possible performance for read and write operations along with elasticity for cluster scaling. Data guarantees Because each node contains all entries, replicated caches provide more protection against data loss than distributed caches. On a cluster of three nodes, two nodes can crash and you do not lose data from a replicated cache. In that same scenario, a distributed cache with two owners would lose data. To avoid data loss with distributed caches, you can increase the number of replicas across the cluster by configuring more owners for each entry with either the owners attribute declaratively or the numOwners() method programmatically. Rebalancing operations when node failure occurs Rebalancing operations after node failure can impact performance and capacity. When a node leaves the cluster, Data Grid replicates cache entries among the remaining members to restore the configured number of owners. This rebalancing operation is temporary, but the increased cluster traffic has a negative impact on performance. Performance degradation is greater the more nodes leave. The nodes left in the cluster might not have enough capacity to keep all data in memory when too many nodes leave. Cluster scaling Data Grid clusters scale horizontally as your workloads demand to more efficiently use compute resources like CPU and memory. To take the most advantage of this elasticity, you should consider how scaling the number of nodes up or down affects cache capacity. For replicated caches, each time a node joins the cluster, it receives a complete copy of the data set. Replicating all entries to each node increases the time it takes for nodes to join and imposes a limit on overall capacity. Replicated caches can never exceed the amount of memory available to the host. For example, if the size of your data set is 10 GB, each node must have at least 10 GB of available memory. For distributed caches, adding more nodes increases capacity because each member of the cluster stores only a subset of the data. To store 10 GB of data, you can have eight nodes each with 5 GB of available memory if the number of owners is two, without taking memory overhead into consideration. Each additional node that joins the cluster increases the capacity of the distributed cache by 5 GB. The capacity of a distributed cache is not bound by the amount of memory available to underlying hosts. Synchronous or asynchronous replication Data Grid can communicate synchronously or asynchronously when primary owners send replication requests to backup nodes. Replication mode Effect on performance Synchronous Synchronous replication helps to keep your data consistent but adds latency to cluster traffic that reduces throughput for cache writes. 
Asynchronous Asynchronous replication reduces latency and increases the speed of write operations but leads to data inconsistency and provides a lower guarantee against data loss. With synchronous replication, Data Grid notifies the originating node when replication requests complete on backup nodes. Data Grid retries the operation if a replication request fails due to a change to the cluster topology. When replication requests fail due to other errors, Data Grid throws exceptions for client applications. With asynchronous replication, Data Grid does not provide any confirmation for replication requests. This has the same effect for applications as all requests being successful. On the Data Grid cluster, however, the primary owner has the correct entry and Data Grid replicates it to backup nodes at some point in the future. In the case that the primary owner crashes then backup nodes might not have a copy of the entry or they might have an out of date copy. Cluster topology changes can also lead to data inconsistency with asynchronous replication. For example, consider a Data Grid cluster that has multiple primary owners. Due to a network error or some other issue, one or more of the primary owners leaves the cluster unexpectedly so Data Grid updates which nodes are the primary owners for which segments. When this occurs, it is theoretically possible for some nodes to use the old cluster topology and some nodes to use the updated topology. With asynchronous communication, this might lead to a short time where Data Grid processes replication requests from the topology and applies older values from write operations. However, Data Grid can detect node crashes and update cluster topology changes quickly enough that this scenario is not likely to affect many write operations. Using asynchronous replication does not guarantee improved throughput for writes, because asynchronous replication limits the number of backup writes that a node can handle at any time to the number of possible senders (via JGroups per-sender ordering). Synchronous replication allows nodes to handle more incoming write operations at the same time, which in certain configurations might compensate for the fact that individual operations take longer to complete, giving you a higher total throughput. When a node sends multiple requests to replicate entries, JGroups sends the messages to the rest of the nodes in the cluster one at a time, which results in there being only one replication request per originating node. This means that Data Grid nodes can process, in parallel with other write operations, one write from each other node in the cluster. Data Grid uses a JGroups flow control protocol in the cluster transport layer to handle replication requests to backup nodes. If the number of unconfirmed replication requests exceeds the flow control threshold, set with the max_credits attribute (4MB by default), write operations are blocked on the originator node. This applies to both synchronous and asynchronous replication. Number of segments Data Grid divides data into segments to distribute data evenly across clusters. Even distribution of segments avoids overloading individual nodes and makes cluster re-balancing operations more efficient. Data Grid creates 256 hash space segments per cluster by default. For deployments with up to 20 nodes per cluster, this number of segments is ideal and should not change. 
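To make the owners and segments settings above concrete, here is a minimal sketch of an embedded distributed cache configuration. It uses the programmatic ConfigurationBuilder API from the upstream Infinispan project on which Data Grid is based; the class and method names (ConfigurationBuilder, CacheMode.DIST_SYNC, numOwners, numSegments) are not shown in this guide and should be treated as assumptions to verify against your Data Grid version.
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class DistributedCacheSizingExample {
    public static Configuration distributedCache() {
        // Distributed, synchronous cache: two copies of each entry and the
        // default 256 hash space segments discussed above.
        return new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.DIST_SYNC)
                    .hash()
                        .numOwners(2)
                        .numSegments(256)
                .build();
    }
}
Raising numOwners trades capacity for data protection, while numSegments normally only needs to grow for clusters of more than 20 nodes, as discussed next.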
For deployments with greater than 20 nodes per cluster, increasing the number of segments increases the granularity of your data so Data Grid can distribute it across the cluster more efficiently. Use the following formula to calculate approximately how many segments you should configure: Number of segments = 20 * Number of nodes in the cluster For example, with a cluster of 30 nodes you should configure 600 segments. Adding more segments for larger clusters is generally a good idea, though, and this formula should provide you with a rough idea of the number that is right for your deployment. Changing the number of segments Data Grid creates requires a full cluster restart. If you use persistent storage you might also need to use the StoreMigrator utility to change the number of segments, depending on the cache store implementation. Changing the number of segments can also lead to data corruption so you should do so with caution and based on metrics that you gather from benchmarking and performance monitoring. Note Data Grid always segments data that it stores in memory. When you configure cache stores, Data Grid does not always segment data in persistent storage. It depends on the cache store implementation but, whenever possible, you should enable segmentation for a cache store. Segmented cache stores improve Data Grid performance when iterating over data in persistent storage. For example, with RocksDB and JDBC-string based cache stores, segmentation reduces the number of objects that Data Grid needs to retrieve from the database. 2.4. Strategies to manage stale data If Data Grid is not the primary source of data, embedded and remote caches are stale by nature. While planning, benchmarking, and tuning your Data Grid deployment, choose the appropriate level of cache staleness for your applications. Choose a level that allows you to make the best use of available RAM and avoid cache misses. If Data Grid does not have the entry in memory, then calls go to the primary store when applications send read and write requests. Cache misses increase the latency of reads and writes but, in many cases, calls to the primary store are more costly than the performance penalty to Data Grid. One example of this is offloading relational database management systems (RDBMS) to Data Grid clusters. Deploying Data Grid in this way greatly reduces the financial cost of operating traditional databases so tolerating a higher degree of stale entries in caches makes sense. With Data Grid you can configure maximum idle and lifespan values for entries to maintain an acceptable level of cache staleness. Expiration Controls how long Data Grid keeps entries in a cache and takes effect across clusters. Higher expiration values mean that entries remain in memory for longer, which increases the likelihood that read operations return stale values. Lower expiration values mean that there are fewer stale values in the cache but the likelihood of cache misses is greater. To carry out expiration, Data Grid creates a reaper from the existing thread pool. The main performance consideration with the thread is configuring the right interval between expiration runs. Shorter intervals perform more frequent expiration but use more threads. Additionally, with maximum idle expiration, you can control how Data Grid updates timestamp metadata across clusters. Data Grid sends touch commands to coordinate maximum idle expiration across nodes synchronously or asynchronously.
With synchronous replication, you can choose either "sync" or "async" touch commands depending on whether you prefer consistency or speed. 2.5. JVM memory management with eviction RAM is a costly resource and usually limited in availability. Data Grid lets you manage memory usage to give priority to frequently used data by removing entries from memory. Eviction Controls the amount of data that Data Grid keeps in memory and takes effect for each node. Eviction bounds Data Grid caches by: Total number of entries, a maximum count. Amount of JVM memory, a maximum size. Important Data Grid evicts entries on a per-node basis. Because not all nodes evict the same entries you should use eviction with persistent storage to avoid data inconsistency. The impact to performance from eviction comes from the additional processing that Data Grid needs to calculate when the size of a cache reaches the configured threshold. Eviction can also slow down read operations. For example, if a read operation retrieves an entry from a cache store, Data Grid brings that entry into memory and then evicts another entry. This eviction process can include writing the newly evicted entry to the cache store, if using passivation. When this happens, the read operation does not return the value until the eviction process is complete. 2.6. JVM heap and off-heap memory Data Grid stores cache entries in JVM heap memory by default. You can configure Data Grid to use off-heap storage, which means that your data occupies native memory outside the managed JVM memory space. The following diagram is a simplified illustration of the memory space for a JVM process where Data Grid is running: Figure 2.1. JVM memory space JVM heap memory The heap is divided into young and old generations that help keep referenced Java objects and other application data in memory. The GC process reclaims space from unreachable objects, running more frequently on the young generation memory pool. When Data Grid stores cache entries in JVM heap memory, GC runs can take longer to complete as you start adding data to your caches. Because GC is an intensive process, longer and more frequent runs can degrade application performance. Off-heap memory Off-heap memory is native available system memory outside JVM memory management. The JVM memory space diagram shows the Metaspace memory pool that holds class metadata and is allocated from native memory. The diagram also represents a section of native memory that holds Data Grid cache entries. Off-heap memory: Uses less memory per entry. Improves overall JVM performance by avoiding Garbage Collector (GC) runs. One disadvantage, however, is that JVM heap dumps do not show entries stored in off-heap memory. 2.6.1. Off-heap data storage When you add entries to off-heap caches, Data Grid dynamically allocates native memory to your data. Data Grid hashes the serialized byte[] for each key into buckets that are similar to a standard Java HashMap . Buckets include address pointers that Data Grid uses to locate entries that you store in off-heap memory. Important Even though Data Grid stores cache entries in native memory, run-time operations require JVM heap representations of those objects. For instance, cache.get() operations read objects into heap memory before returning. Likewise, state transfer operations hold subsets of objects in heap memory while they take place. 
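To make the eviction bounds and off-heap storage described above concrete, here is a minimal sketch, again assuming the upstream Infinispan ConfigurationBuilder API; the bounds shown (1GB, 500,000 entries) are placeholder values rather than recommendations.
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.StorageType;
import org.infinispan.eviction.EvictionStrategy;

public class MemoryBoundsExample {
    public static Configuration offHeapBoundedBySize() {
        // Keep entries in native (off-heap) memory and evict once the data
        // container reaches roughly 1GB.
        return new ConfigurationBuilder()
                .memory()
                    .storage(StorageType.OFF_HEAP)
                    .maxSize("1GB")
                    .whenFull(EvictionStrategy.REMOVE)
                .build();
    }

    public static Configuration heapBoundedByCount() {
        // Keep entries on the JVM heap but cap the cache at 500,000 entries.
        return new ConfigurationBuilder()
                .memory()
                    .maxCount(500_000)
                .build();
    }
}
Pairing either bound with a cache store, as the eviction note above advises, keeps evicted entries available from persistent storage instead of losing them.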
Object equality Data Grid determines equality of Java objects in off-heap storage using the serialized byte[] representation of each object instead of the object instance. Data consistency Data Grid uses an array of locks to protect off-heap address spaces. The number of locks is twice the number of cores and then rounded to the nearest power of two. This ensures that there is an even distribution of ReadWriteLock instances to prevent write operations from blocking read operations. 2.7. Persistent storage Configuring Data Grid to interact with a persistent data source greatly impacts performance. This performance penalty comes from the fact that more traditional data sources are inherently slower than in-memory caches. Read and write operations will always take longer when the call goes outside the JVM. Depending on how you use cache stores, though, the reduction of Data Grid performance is offset by the performance boost that in-memory data provides over accessing data in persistent storage. Configuring Data Grid deployments with persistent storage also gives other benefits, such as allowing you to preserve state for graceful cluster shutdowns. You can also overflow data from your caches to persistent storage and gain capacity beyond what is available in memory only. For example, you can have ten million entries in total while keeping only two million of them in memory. Data Grid adds key/value pairs to caches and persistent storage in either write-through mode or write-behind mode. Because these writing modes have different impacts on performance, you must consider them when planning a Data Grid deployment. Writing mode Effect on performance Write-through Data Grid writes data to the cache and persistent storage simultaneously, which increases consistency and avoids data loss that can result from node failure. The downside to write-through mode is that synchronous writes add latency and decrease throughput. Cache.put() calls result in application threads waiting until writes to persistent storage complete. Write-behind Data Grid synchronously writes data to the cache but then adds the modification to a queue so that the write to persistent storage happens asynchronously, which decreases consistency but reduces latency of write operations. When the cache store cannot handle the number of write operations, Data Grid delays new writes until the number of pending write operations goes below the configured modification queue size, in a similar way to write-through. If the store is normally fast enough but latency spikes occur during bursts of cache writes, you can increase the modification queue size to contain the bursts and reduce the latency. Passivation Enabling passivation configures Data Grid to write entries to persistent storage only when it evicts them from memory. Passivation also implies activation. Performing a read or write on a key brings that key back into memory and removes it from persistent storage. Removing keys from persistent storage during activation does not block the read or write operation, but it does increase load on the external store. Passivation and activation can potentially result in Data Grid performing multiple calls to persistent storage for a given entry in the cache. For example, if an entry is not available in memory, Data Grid brings it back into memory which is one read operation and a delete operation to remove it from persistent storage. 
Additionally, if the cache has reached the size limit, then Data Grid performs another write operation to passivate a newly evicted entry. Pre-loading caches with data Another aspect of persistent storage that can affect Data Grid cluster performance is pre-loading caches. This capability populates your caches with data when Data Grid clusters start so they are "warm" and can handle reads and writes straight away. Pre-loading caches can slow down Data Grid cluster start times and result in out of memory exceptions if the amount of data in persistent storage is greater than the amount of available RAM. 2.8. Cluster security Protecting your data and preventing network intrusion is one of the most important aspect of deployment planning. Sensitive customer details leaking to the open internet or data breaches that allow hackers to publicly expose confidential information have devastating impacts on business reputation. With this in mind you need a robust security strategy to authenticate users and encrypt network communication. But what are the costs to the performance of your Data Grid deployment? How should you approach these considerations during planning? Authentication The performance cost of validating user credentials depends on the mechanism and protocol. Data Grid validates credentials once per user over Hot Rod while potentially for every request over HTTP. Table 2.1. Authentication mechanisms SASL mechanism HTTP mechanism Performance impact PLAIN BASIC While PLAIN and BASIC are the fastest authentication mechanisms, they are also the least secure. You should only ever use PLAIN or BASIC in combination with TLS/SSL encryption. DIGEST and SCRAM DIGEST For both Hot Rod and HTTP requests, the DIGEST scheme uses MD5 hashing algorithms to hash credentials so they are not transmitted in plain text. If you do not enable TLS/SSL encryption then using DIGEST is overall less resource intensive than PLAIN or BASIC with encryption but not as secure because DIGEST is vulnerable to monkey-in-the-middle (MITM) attacks and other intrusions. For Hot Rod endpoints, the SCRAM scheme is similar to DIGEST with extra levels of protection that increase security but require additional processing that take longer to complete. GSSAPI / GS2-KRB5 SPNEGO A Kerberos server, Key Distribution Center (KDC), handles authentication and issues tokens to users. Data Grid performance benefits from the fact that a separate system handles user authentication operations. However these mechanisms can lead to network bottlenecks depending on the performance of the KDC service itself. OAUTHBEARER BEARER_TOKEN Federated identity providers that implement the OAuth standard for issuing temporary access tokens to Data Grid users. Users authenticate with an identity service instead of directly authenticating to Data Grid, passing the access token as a request header instead. Compared to handling authentication directly, there is a lower performance penalty for Data Grid to validate user access tokens. Similarly to a KDC, actual performance implications depend on the quality of service for the identity provider itself. EXTERNAL CLIENT_CERT You can provide trust stores to Data Grid Server so that it authenticates inbound connections by comparing certificates that clients present against the trust stores. If the trust store contains only the signing certificate, which is typically a Certificate Authority (CA), any client that presents a certificate signed by the CA can connect to Data Grid. 
This offers lower security and is vulnerable to MITM attacks but is faster than authenticating the public certificate of each client. If the trust store contains all client certificates in addition to the signing certificate, only those clients that present a signed certificate that is present in the trust store can connect to Data Grid. In this case Data Grid compares the common Common Name (CN) from the certificate that the client presents with the trust store in addition to verifying that the certificate is signed, adding more overhead. Encryption Encrypting cluster transport secures data as it passes between nodes and protects your Data Grid deployment from MITM attacks. Nodes perform TLS/SSL handshakes when joining the cluster which carries a slight performance penalty and increased latency with additional round trips. However, once each node establishes a connection it stays up forever assuming connections never go idle. For remote caches, Data Grid Server can also encrypt network communication with clients. In terms of performance the effect of TLS/SSL connections between clients and remote caches is the same. Negotiating secure connections takes longer and requires some additional work but, once the connections are established latency from encryption is not a concern for Data Grid performance. Apart from using TLSv1.3, the only means of offsetting performance loss from encryption are to configure the JVM on which Data Grid runs. For instance using OpenSSL libraries instead of standard Java encryption provides more efficient handling with results up to 20% faster. Authorization Role-based access control (RBAC) lets you restrict operations on data, offering additional security to your deployments. RBAC is the best way to implement a policy of least privilege for user access to data distributed across Data Grid clusters. Data Grid users must have a sufficient level of authorization to read, create, modify, or remove data from caches. Adding another layer of security to protect your data will always carry a performance cost. Authorization adds some latency to operations because Data Grid validates each one against an Access Control List (ACL) before allowing users to manipulate data. However the overall impact to performance from authorization is much lower than encryption so the cost to benefit generally balances out. 2.9. Client listeners Client listeners provide notifications whenever data is added, removed, or modified on your Data Grid cluster. As an example, the following implementation triggers an event whenever temperatures change at a given location: @ClientListener public class TemperatureChangesListener { private String location; TemperatureChangesListener(String location) { this.location = location; } @ClientCacheEntryCreated public void created(ClientCacheEntryCreatedEvent event) { if(event.getKey().equals(location)) { cache.getAsync(location) .whenComplete((temperature, ex) -> System.out.printf(">> Location %s Temperature %s", location, temperature)); } } } Adding listeners to Data Grid clusters adds performance considerations for your deployment. For embedded caches, listeners use the same CPU cores as Data Grid. Listeners that receive many events and use a lot of CPU to process those events reduce the CPU available to Data Grid and slow down all other operations. For remote caches, Data Grid Server uses an internal process to trigger client notifications. 
Data Grid Server sends the event from the primary owner node to the node where the listener is registered before sending it to the client. Data Grid Server also includes a backpressure mechanism that delays write operations to caches if client listeners process events too slowly. Filtering listener events If listeners are invoked on every write operation, Data Grid can generate a high number of events, creating network traffic both inside the cluster and to external clients. It all depends on how many clients are registered with each listener, the type of events they trigger, and how data changes on your Data Grid cluster. As an example with remote caches, if you have ten clients registered with a listener that emits 10 events, Data Grid Server sends 100 events in total across the network. You can provide Data Grid Server with custom filters to reduce traffic to clients. Filters allow Data Grid Server to first process events and determine whether to forward them to clients. Continuous queries and listeners Continuous queries allow you to receive events for matching entries and offer an alternative to deploying client listeners and filtering listener events. Of course queries have additional processing costs that you need to take into account but, if you already index caches and perform queries, using a continuous query instead of a client listener could be worthwhile. 2.10. Indexing and querying caches Querying Data Grid caches lets you analyze and filter data to gain real-time insights. As an example, consider an online game where players compete against each other in some way to score points. If you wanted to implement a leaderboard with the top ten players at any one time, you could create a query to find out which players have the most points at any one time and limit the result to a maximum of ten as follows: QueryFactory queryFactory = Search.getQueryFactory(playersScores); Query topTenQuery = queryFactory .create("from com.redhat.PlayerScore p ORDER BY p.score DESC, p.timestamp ASC") .maxResults(10); List<PlayerScore> topTen = topTenQuery.execute().list(); The preceding example illustrates the benefit of using queries because it lets you find ten entries that match your criteria out of potentially millions of cache entries. In terms of performance impact, though, you should consider the tradeoffs that come with indexing operations versus query operations. Configuring Data Grid to index caches results in much faster queries. Without indexes, queries must scroll through all data in the cache, slowing down results by orders of magnitude depending on the type and amount of data. There is a measurable loss of performance for writes when indexing is enabled. However, with some careful planning and a good understanding of what you want to index, you can avoid the worst effects. The most effective approach is to configure Data Grid to index only the fields that you need. Whether you store Plain Old Java Objects (POJOs) or use Protobuf schema, the more fields that you annotate, the longer it takes Data Grid to build the index. If you have a POJO with five fields but you only need to query two of those fields, do not configure Data Grid to index the three fields you don't need. Data Grid gives you several options to tune indexing operations. For instance Data Grid stores indexes differently to data, saving indexes to disk instead of memory. Data Grid keeps the index synchronized with the cache using an index writer, whenever an entry is added, modified or deleted.
If you enable indexing and then observe slower writes, and think indexing causes the loss of performance, you can keep indexes in a memory buffer for longer periods of time before writing to disk. This results in faster indexing operations, and helps mitigate degradation of write throughput, but consumes more memory. For most deployments, though, the default indexing configuration is suitable and does not slow down writes too much. In some scenarios it might be sensible not to index your caches, such as for write-heavy caches that you need to query infrequently and don't need results in milliseconds. It all depends on what you want to achieve. Faster queries mean faster reads but come at the expense of the slower writes that come with indexing. You can improve performance of indexed queries by properly setting the maxResults and hit-count-accuracy values. Additional resources Querying Data Grid caches 2.10.1. Continuous queries and Data Grid performance Continuous queries provide a constant stream of updates to applications, which can generate a significant number of events. Data Grid temporarily allocates memory for each event it generates, which can result in memory pressure and potentially lead to OutOfMemoryError exceptions, especially for remote caches. For this reason, you should carefully design your continuous queries to avoid any performance impact. Data Grid strongly recommends that you limit the scope of your continuous queries to the smallest amount of information that you need. To achieve this, you can use projections and predicates. For example, the following statement provides results about only a subset of fields that match the criteria rather than the entire entry: SELECT field1, field2 FROM Entity WHERE x AND y It is also important to ensure that each ContinuousQueryListener you create can quickly process all received events without blocking threads. To achieve this, you should avoid any cache operations that generate events unnecessarily. 2.11. Data consistency Data that resides on a distributed system is vulnerable to errors that can arise from temporary network outages, system failures, or just simple human error. These external factors are uncontrollable but can have serious consequences for the quality of your data. The effects of data corruption range from lower customer satisfaction to costly system reconciliation that results in service unavailability. Data Grid can carry out ACID (atomic, consistent, isolated, durable) transactions to ensure the cache state is consistent. Transactions are a sequence of operations that Data Grid carries out as a single operation. Either all write operations in a transaction complete successfully or they all fail. In this way, the transaction either modifies the cache state in a consistent way, providing a history of reads and writes, or it does not modify cache state at all. The main performance concern for enabling transactions is finding the balance between having a more consistent data set and increasing latency that degrades write throughput. Write locks with transactions Configuring the wrong locking mode can negatively affect the performance of your transactions. The right locking mode depends on whether your Data Grid deployment has a high or low rate of contention for keys. For workloads with low rates of contention, where two or more transactions are not likely to write to the same key simultaneously, optimistic locking offers the best performance. Data Grid acquires write locks on keys before transactions commit.
If there is contention for keys, the time it takes to acquire locks can delay commits. Additionally, if Data Grid detects conflicting writes, then it rolls the transaction back and the application must retry it, increasing latency. For workloads with high rates of contention, pessimistic locking provides the best performance. Data Grid acquires write locks on keys when applications access them to ensure no other transaction can modify the keys. Transaction commits complete in a single phase because keys are already locked. Pessimistic locking with multiple key transactions results in Data Grid locking keys for longer periods of time, which can decrease write throughput. Read isolation Isolation levels do not impact Data Grid performance considerations except for optimistic locking with REPEATABLE_READ . With this combination, Data Grid checks for write skews to detect conflicts, which can result in longer transaction commit phases. Data Grid also uses version metadata to detect conflicting write operations, which can increase the amount of memory per entry and generate additional network traffic for the cluster. Transaction recovery and partition handling If networks become unstable due to partitions or other issues, Data Grid can mark transactions as "in-doubt". When this happens Data Grid retains write locks that it acquires until the network stabilizes and the cluster returns to a healthy operational state. In some cases it might be necessary for a system administrator to manually complete any "in-doubt" transactions. 2.12. Network partitions and degraded clusters Data Grid clusters can encounter split brain scenarios where subsets of nodes in the cluster become isolated from each other and communication between nodes becomes disjointed. When this happens, Data Grid caches in minority partitions enter DEGRADED mode while caches in majority partitions remain available. Note Garbage collection (GC) pauses are the most common cause of network partitions. When GC pauses result in nodes becoming unresponsive, Data Grid clusters can start operating in a split brain network. Rather than dealing with network partitions, try to avoid GC pauses by controlling JVM heap usage and by using a modern, low-pause GC implementation such as Shenandoah with OpenJDK. CAP theorem and partition handling strategies CAP theorem expresses a limitation of distributed, key/value data stores, such as Data Grid. When network partition events happen, you must choose between consistency or availability while Data Grid heals the partition and resolves any conflicting entries. Availability Allow read and write operations. Consistency Deny read and write operations. Data Grid can also allow reads only while joining clusters back together. This strategy is a more balanced option of consistency by denying writes to entries and availability by allowing applications to access (potentially stale) data. Removing partitions As part of the process of joining the cluster back together and returning to normal operations, Data Grid resolves conflicting entries according to a merge policy. By default Data Grid does not attempt to resolve conflicts on merge which means clusters return to a healthy state sooner and there is no performance penalty beyond normal cluster rebalancing. However, in this case, data in the cache is much more likely to be inconsistent. If you configure a merge policy then it takes much longer for Data Grid to heal partitions. 
Configuring a merge policy results in Data Grid retrieving every version of an entry from each cache and then resolving any conflicts as follows: PREFERRED_ALWAYS Data Grid finds the value that exists on the majority of nodes in the cluster and applies it, which can restore out of date values. PREFERRED_NON_NULL Data Grid applies the first non-null value that it finds on the cluster, which can restore out of date values. REMOVE_ALL Data Grid removes any entries that have conflicting values. 2.12.1. Garbage collection and partition handling Long garbage collection (GC) times can increase the amount of time it takes Data Grid to detect network partitions. In some cases, GC can cause Data Grid to exceed the maximum time to detect a split. Additionally, when merging partitions after a split, Data Grid attempts to confirm all nodes are present in the cluster. Because no timeout or upper bound applies to the response time from nodes, the operation to merge the cluster view can be delayed. This can result from network issues as well as long GC times. Another scenario in which GC can impact performance through partition handling is when GC suspends the JVM, causing one or more nodes to leave the cluster. When this occurs, and suspended nodes resume after GC completes, the nodes can have out of date or conflicting cluster topologies. If a merge policy is configured, Data Grid attempts to resolve conflicts before merging the nodes. However, the merge policy is used only if the nodes have incompatible consistent hashes. Two consistent hashes are compatible if they have at least one common owner for each segment or incompatible if they have no common owner for at least one segment. When nodes have old, but compatible, consistent hashes, Data Grid ignores the out of date cluster topology and does not attempt to resolve conflicts. For example, if one node in the cluster is suspended due to garbage collection (GC), other nodes in the cluster remove it from the consistent hash and replace it with new owner nodes. If numOwners > 1 , the old consistent hash and the new consistent hash have a common owner for every key, which makes them compatible and allows Data Grid to skip the conflict resolution process. 2.13. Cluster backups and disaster recovery Data Grid clusters that perform cross-site replication are typically "symmetrical" in terms of overall CPU and memory allocation. When you take cross-site replication into account for sizing, the primary concern is the impact of state transfer operations between clusters. For example, a Data Grid cluster in NYC goes offline and clients switch to a Data Grid cluster in LON. When the cluster in NYC comes back online, state transfer occurs from LON to NYC. This operation prevents stale reads from clients but has a performance penalty for the cluster that receives state transfer. You can distribute the increase in processing that state transfer operations require across the cluster. However the performance impact from state transfer operations depends entirely on the environment and factors such as the type and size of the data set. Conflict resolution for Active/Active deployments Data Grid detects conflicts with concurrent write operations when multiple sites handle client requests, known as an Active/Active site configuration. 
The following example illustrates how concurrent writes result in a conflicting entry for Data Grid clusters running in the LON and NYC data centers: In an Active/Active site configuration, you should never use the synchronous backup strategy because concurrent writes result in deadlocks and you can lose data. With the asynchronous backup strategy ( strategy=async ), Data Grid gives you a choice of cross-site merge policies for handling concurrent writes. In terms of performance, merge policies that Data Grid uses to resolve conflicts do require additional computation but generally do not incur a significant penalty. For instance, the default cross-site merge policy uses a lexicographic comparison, or "string comparison", that only takes a couple of nanoseconds to complete. Data Grid also provides an XSiteEntryMergePolicy SPI for cross-site merge policies. If you do configure Data Grid to resolve conflicts with a custom implementation, you should always monitor performance to gauge any adverse effects. Note The XSiteEntryMergePolicy SPI invokes all merge policies in the non-blocking thread pool. If you implement a blocking custom merge policy, it can exhaust the thread pool. You should delegate complex or blocking policies to a different thread and your implementation should return a CompletionStage that completes when the merge policy is done in the other thread. 2.14. Code execution and data processing One of the benefits of distributed caching is that you can leverage compute resources from each host to perform large-scale data processing more efficiently. By executing your processing logic directly on Data Grid, you spread the workload across multiple JVM instances. Your code also runs in the same memory space where Data Grid stores your data, meaning that you can iterate over entries much faster. The performance impact on your Data Grid deployment depends entirely on the code you execute. More complex processing operations have higher performance penalties, so you should approach running any code on Data Grid clusters with careful planning. Start out by testing your code and performing multiple execution runs on a smaller sample data set. After you gather some metrics, you can start identifying optimizations and understanding the performance implications of the code you're running. One definite consideration is that long-running processes can have a negative impact on normal read and write operations, so it is imperative that you monitor your deployment over time and continually assess performance. Embedded caches With embedded caches, Data Grid provides two APIs that let you execute code in the same memory space as your data. ClusterExecutor API Lets you perform any operation with the Cache Manager, including iterating over the entries of one or more caches, and gives you processing based on Data Grid nodes. CacheStream API Lets you perform operations on collections and gives you processing based on data. If you want to run an operation on a single node, a group of nodes, or all nodes in a certain geographic region, then you should use clustered execution. If you want to run an operation that guarantees a correct result for your entire data set, then using distributed streams is a more effective option.
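To illustrate the note above, the following sketch shows one way to keep blocking work off the non-blocking thread pool. The class name, executor, and resolution logic are hypothetical, and the sketch assumes the XSiteEntryMergePolicy interface shape used in recent Data Grid releases, where merge() receives the local and remote SiteEntry values and returns a CompletionStage.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;

import org.infinispan.xsite.spi.SiteEntry;
import org.infinispan.xsite.spi.XSiteEntryMergePolicy;

public class DelegatingMergePolicy<K, V> implements XSiteEntryMergePolicy<K, V> {

   // Hypothetical application-supplied executor for blocking or expensive work.
   private final Executor blockingExecutor;

   public DelegatingMergePolicy(Executor blockingExecutor) {
      this.blockingExecutor = blockingExecutor;
   }

   @Override
   public CompletionStage<SiteEntry<V>> merge(K key, SiteEntry<V> localEntry, SiteEntry<V> remoteEntry) {
      // Run the potentially blocking comparison on a separate thread so that
      // Data Grid's non-blocking thread pool is never exhausted.
      return CompletableFuture.supplyAsync(() -> resolve(localEntry, remoteEntry), blockingExecutor);
   }

   private SiteEntry<V> resolve(SiteEntry<V> localEntry, SiteEntry<V> remoteEntry) {
      // Placeholder resolution logic: prefer the local site's entry.
      return localEntry;
   }
}

Because the merge runs on the application-supplied executor, Data Grid's internal threads remain free to serve requests while conflicts are resolved; as noted above, you should still measure commit latency before and after enabling a custom policy.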
Cluster execution ClusterExecutor clusterExecutor = cacheManager.executor(); clusterExecutor.singleNodeSubmission().filterTargets(policy); for (int i = 0; i < invocations; ++i) { clusterExecutor.submitConsumer((cacheManager) -> { TransportConfiguration tc = cacheManager.getCacheManagerConfiguration().transport(); return tc.siteId() + tc.rackId() + tc.machineId(); }, triConsumer).get(10, TimeUnit.SECONDS); } CacheStream Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("JBoss")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); Additional resources org.infinispan.manager.ClusterExecutor org.infinispan.CacheStream Remote caches For remote caches, Data Grid provides a ServerTask API that lets you register custom Java implementations with Data Grid Server and execute tasks programmatically by calling the execute() method over Hot Rod or by using the Data Grid Command Line Interface (CLI). You can execute tasks on a single Data Grid Server instance or on all server instances in the cluster. 2.15. Client traffic When sizing remote Data Grid clusters, you need to calculate not only the number and size of entries but also the amount of client traffic. Data Grid needs enough RAM to store your data and enough CPU to handle client read and write requests in a timely manner. There are many different factors that affect latency and determine response times. For example, the size of the key/value pair affects the response time for remote caches. Other factors that affect remote cache performance include the number of requests per second that the cluster receives, the number of clients, and the ratio of read operations to write operations.
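As a sketch of the ServerTask API, the task below builds a greeting from a parameter sent by the client; the class name, task name, and parameter are illustrative. Server tasks are packaged in a JAR, registered with Data Grid Server (typically through a META-INF/services entry for the ServerTask interface), and then invoked by name from a Hot Rod client.

import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;

public class GreetingTask implements ServerTask<String> {

   private TaskContext taskContext;

   @Override
   public void setTaskContext(TaskContext taskContext) {
      // Gives the task access to the cache, marshaller, and request parameters.
      this.taskContext = taskContext;
   }

   @Override
   public String getName() {
      // Clients invoke the task by this name.
      return "greeting-task";
   }

   @Override
   public String call() throws Exception {
      // "name" is an illustrative parameter supplied by the caller.
      String name = (String) taskContext.getParameters().get().get("name");
      return "Hello " + name;
   }
}

// Hot Rod client side (remoteCache is an existing RemoteCache instance):
// String greeting = remoteCache.execute("greeting-task", Collections.singletonMap("name", "world"));

A task of this kind typically runs on the single server that receives the request unless its execution mode is overridden to run on all servers, which corresponds to the one-instance-or-all-instances choice described above.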
"Data set size = Number of entries * (Average key size + Average value size + Memory overhead)",
"Distributed data set size = Data set size * (Number of owners + 1)",
"Distributed data set size <= Available memory per node * Minimum number of nodes",
"Planned nodes = Minimum number of nodes + Number of owners - 1 Distributed data set size <= Available memory per node * (Planned nodes - Number of owners + 1)",
"Data set size = 1_000_000 * 10KB = 10GB Distributed data set size = (3 + 1) * 10GB = 40GB 40GB <= 4GB * Minimum number of nodes Minimum number of nodes >= 40GB / 4GB = 10 Planned nodes = 10 + 3 - 1 = 12",
"Number of segments = 20 * Number of nodes",
"@ClientListener public class TemperatureChangesListener { private String location; TemperatureChangesListener(String location) { this.location = location; } @ClientCacheEntryCreated public void created(ClientCacheEntryCreatedEvent event) { if(event.getKey().equals(location)) { cache.getAsync(location) .whenComplete((temperature, ex) -> System.out.printf(\">> Location %s Temperature %s\", location, temperature)); } } }",
"QueryFactory queryFactory = Search.getQueryFactory(playersScores); Query topTenQuery = queryFactory .create(\"from com.redhat.PlayerScore ORDER BY p.score DESC, p.timestamp ASC\") .maxResults(10); List<PlayerScore> topTen = topTenQuery.execute().list();",
"SELECT field1, field2 FROM Entity WHERE x AND y",
"LON NYC k1=(n/a) 0,0 0,0 k1=2 1,0 --> 1,0 k1=2 k1=3 1,1 <-- 1,1 k1=3 k1=5 2,1 1,2 k1=8 --> 2,1 (conflict) (conflict) 1,2 <--",
"ClusterExecutor clusterExecutor = cacheManager.executor(); clusterExecutor.singleNodeSubmission().filterTargets(policy); for (int i = 0; i < invocations; ++i) { clusterExecutor.submitConsumer((cacheManager) -> { TransportConfiguration tc = cacheManager.getCacheManagerConfiguration().transport(); return tc.siteId() + tc.rackId() + tc.machineId(); }, triConsumer).get(10, TimeUnit.SECONDS); }",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"JBoss\")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_performance_and_sizing_guide/deployment-planning |
Specialized hardware and driver enablement | Specialized hardware and driver enablement OpenShift Container Platform 4.15 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/specialized_hardware_and_driver_enablement/index |
Using automated rules on Cryostat | Using automated rules on Cryostat Red Hat build of Cryostat 3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_automated_rules_on_cryostat/index |
Installing on a single node | Installing on a single node OpenShift Container Platform 4.14 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_a_single_node/index |
D.19. Guides View | D.19. Guides View To open Teiid Designer's Guides View , click the main menu's Window > Show View > Other... and then click the Teiid Designer > Guides view in the dialog. The Guides view provides assistance for many common modeling tasks. The view includes categorized Modeling Actions and also links to Cheat Sheets for common processes. Cheat Sheets are an eclipse concept for which Teiid Designer has provided contributions (see Cheat Sheets view). The Guides view is shown below: Figure D.31. Guides View The upper Action Sets section provides categorized sets of actions. Select the desired category in the drop-down, then the related actions for the selected category are displayed in the list below it. Execute an action by clicking the Execute selected action link or double-clicking the action. The lower Cheat Sheets section provides a list of available Cheat Sheet links, which will launch the appropriate Cheat Sheet to guide you step-by-step through the selected process. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/guides_view |
14.5. More Than a Secure Shell | 14.5. More Than a Secure Shell A secure command-line interface is just the beginning of the many ways SSH can be used. Given the proper amount of bandwidth, X11 sessions can be directed over an SSH channel. Or, by using TCP/IP forwarding, previously insecure port connections between systems can be mapped to specific SSH channels. 14.5.1. X11 Forwarding To open an X11 session over an SSH connection, use a command in the following form: For example, to log in to a remote machine named penguin.example.com with john as a user name, type: When an X program is run from the secure shell prompt, the SSH client and server create a new secure channel, and the X program data is sent over that channel to the client machine transparently. X11 forwarding can be very useful. For example, X11 forwarding can be used to create a secure, interactive session of the Printer Configuration utility. To do this, connect to the server using ssh and type: The Printer Configuration Tool will appear, allowing the remote user to safely configure printing on the remote system. Please note that X11 Forwarding does not distinguish between trusted and untrusted forwarding. | [
"ssh -Y username @ hostname",
"~]USD ssh -Y [email protected] [email protected]'s password:",
"~]USD system-config-printer &"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-ssh-beyondshell |